Replacing database names with variables in SSDT

We are using SSDT to create a build-and-deploy pipeline with Git and Azure DevOps (ADO). For the solution to build, every cross-database reference must have a corresponding database reference in the project and must be referred to through a [$(DatabaseVariable)]. This means that every time a change is made and synced using Schema Compare, the database names have to be replaced, either manually or with a find-and-replace. The find-and-replace approach has numerous downsides, including the fact that it will also replace references in project files that should not become database variables. I'm hoping someone knows of a way to automate this process that doesn't involve such a brute-force method.
I have searched around extensively for this and found nothing helpful.
Here's an example view that contains cross-database references:
CREATE view [Migration].[vwCHILDS_Allegation_Allegation]
as
with src as (
    select
        cast('' as nvarchar(50)) CEAllegationIdentifier
        , (select AllegationTypeId from [$(CWNS_Migration)].Allegation.AllegationType where Code = dfrvmi1.DestinationDataFieldReferenceValueCode) AllegationTypeId
        , cast(1 as int) SourceSystemId
        , cast(src.IDNBR as nvarchar(64)) SourceSystemIdentifier
        , src.IDNBR SourceSystemIdentifier_Numeric
        , case when src.CRET_DT_TM = '0001-01-01' then null else src.CRET_DT_TM end SourceSystemCreatedDateTime
        , case when src.MOD_DT_TM = '0001-01-01' then null else src.MOD_DT_TM end SourceSystemModifiedDateTime
        , (
            select
                max(pe.PersonId)
            from
                [$(CWNS_Migration)].PersonIdentity.PersonIdentifier pe
                join [$(CHILDSDB2)].VLCHA.STAFF_USERID st on cast(st.FK_STAFFFK_PERSID as nvarchar(64)) = pe.Identifier
                    and pe.PersonIdentificationSystemId = 4
            where
                st.USERID = ltrim(rtrim(src.MOD_USR_ID))) SourceSystemModifiedPersonId
    from
        [$(CHILDSDB2)].VLCHA.ALGTN src
        left join [$(DataCatalog)].dbo.DataFieldReferenceValueMappingInfo dfrvmi1 on dfrvmi1.SourceDataFieldReferenceValueDataFieldId = 216
            and dfrvmi1.SourceDataFieldReferenceValueCode = ltrim(rtrim(src.FK_ALGTN_PRIORICTG))
            and dfrvmi1.DestinationDataFieldReferenceValueDataFieldId = 20605
)
select
    src.*
from
    src
    left join [$(CWNS_Migration)].Allegation.Allegation tgt on tgt.SourceSystemId = src.SourceSystemId and tgt.SourceSystemIdentifier = src.SourceSystemIdentifier
    left join [$(CWNS_Migration)].Quarantine.Allegation q on q.SourceSystemId = src.SourceSystemId and q.SourceSystemIdentifier = src.SourceSystemIdentifier
        and q.QExecutionId = 1
where
    q.QExecutionId is null
    and (
        isnull(src.AllegationTypeId, 0) <> isnull(tgt.AllegationTypeId, 0)
        or isnull(try_cast(src.SourceSystemCreatedDateTime as datetime), '') <> isnull(tgt.SourceSystemCreatedDateTime, '')
        or isnull(try_cast(src.SourceSystemModifiedDateTime as datetime), '') <> isnull(tgt.SourceSystemModifiedDateTime, '')
        or isnull(src.SourceSystemModifiedPersonId, 0) <> isnull(tgt.SourceSystemModifiedPersonId, 0)
    )
I'm also hoping there's a way to avoid having Schema Compare always show the variables as differences with the database.

Ideally you need to add all referenced databases as database references in your project (right-click the project and choose Add Database Reference). You can do that in two ways:
* Create a project for each database and import all of its objects, or only the objects used by your database
* Extract a dacpac from the live database and use it as a reference
Then create a synonym for every cross-database object referenced in your main database. For example, for the object [$(CWNS_Migration)].PersonIdentity.PersonIdentifier you'll need to create a synonym such as
CREATE SYNONYM PersonIdentity.PersonIdentifier
FOR [$(CWNS_Migration)].PersonIdentity.PersonIdentifier;
and then use PersonIdentity.PersonIdentifier (the two-part name) in your code instead of the three- or four-part names. That's the project setup.
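With the synonym in place, the correlated subquery in the view above shrinks to a two-part reference (a minimal sketch; the PersonIdentity schema must also exist in the referencing project for the synonym to build):
select max(pe.PersonId)
from PersonIdentity.PersonIdentifier pe   -- synonym, resolves to [$(CWNS_Migration)].PersonIdentity.PersonIdentifier
where pe.PersonIdentificationSystemId = 4;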
Now, when you right-click the project and choose "Publish", a dialog pops up where you can specify the value of the CWNS_Migration variable and change it to whatever you need. You then have the option to save these settings (Save As); the result of that save is called a "publish profile". So for every environment you just need to create a different publish profile and use it when publishing your changes.
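Outside the Publish dialog, the same SQLCMD variable can be set by hand when testing a script in SQLCMD mode (a sketch; CWNS_Migration_Dev is just an illustrative database name):
:setvar CWNS_Migration CWNS_Migration_Dev
select count(*) from [$(CWNS_Migration)].Allegation.AllegationType;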
You might want to try starting with the synonyms alone; it might work (you'd need to verify), but I would recommend adding the database references anyway.

Related

Script to generate a select resolving all child tables?

I'm looking for a way to generate a script that would produce an SQL query selecting all child-table columns starting from a parent table.
Let's say you have a table Class (teacher, room, program) and a table Student (firstname, lastname, age, score, email).
Let's say you want to get a select of all students in Class.
Sure you could write the query manually.
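For the Class/Student example it would be something like this (a sketch only; a class_id foreign-key column is assumed, since the actual keys weren't given):
select s.*
from Class c
join Student s on s.class_id = c.class_id;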
But now imagine you have a complex table with dozens of child tables: how do you do this efficiently and programmatically?
This is something all programmers would like to have, no?
I can't believe no one has ever done that.
I understand the answer may depend on the DBMS vendor, I'm personally looking for a solution for Oracle.
Questions that are a bit similar:
Oracle: Easy way to find names and/or number of child record tables
Postgres: select data from parent table and all child tables
And here is an idea to solve this partially: use a tool such as Power BI, or Visual Studio's generate-model-from-database feature in ASP.NET MVC. You won't get the SQL query, but you will get the data.
You can start with this POC:
select
    juc.table_name as parent_table,
    /*
    uc.table_name as child_table, uc.constraint_name, uc.r_constraint_name,
    juc.constraint_type,
    uccc.column_name as parent_col_name, uccc.position as parent_col_position,
    uccp.column_name as child_col_name, uccp.position as child_col_position,
    */
    'SELECT c.* FROM ' || juc.table_name || ' p JOIN ' || uc.table_name || ' c ON '
    ||
    LISTAGG( 'c.' || uccp.column_name || ' = p.' || uccc.column_name, ' AND ' ) WITHIN GROUP(order by uccc.position)
        as sql
from user_constraints uc                                                     -- foreign-key constraints on the child tables
join user_constraints juc on juc.constraint_name = uc.r_constraint_name     -- the referenced PK/UK constraint on the parent
join user_cons_columns uccc on uccc.constraint_name = uc.r_constraint_name  -- parent key columns
join user_cons_columns uccp on uccp.constraint_name = uc.constraint_name    -- matching child FK columns
    and uccc.position = uccp.position
where uc.constraint_type = 'R'
group by uc.table_name, juc.table_name, uc.constraint_name
;
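For the Class/Student example above, each result row contains a ready-to-run statement along these lines (illustrative output only, assuming the foreign key column is CLASS_ID):
SELECT c.* FROM CLASS p JOIN STUDENT c ON c.CLASS_ID = p.CLASS_ID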
You can create your own entity-relationship model metadata and write PL/SQL that will traverse it and assemble SQL intelligently. I've done this myself to avoid having to hard-code SQL in my front-end apps, but it is highly complex and involves a lot of coding, far more than can be shared in a forum like this. To give you the general gist, I have the following metadata tables that describe my model:
sql_statements - associates a logical entity with a primary table, and specifies the PK column.
sql_statement_parents - defines the parent entity and the child attribute used to join to the parent's PK.
sql_attribute_dictionary - lists every available attribute for every statement, the source column, its datatype, plus optional derived column expressions.
attribute_dependencies - used for derived column expressions, specifies which attributes are needed by the derived attribute.
Then you write code that takes a sql_statements name, a list of desired attributes, and a set of optional filters. It builds a list of needed source tables/columns using the data relationships in the metadata, then uses the parent-child relationships to recursively build SQL (with nested query blocks) from the child up to whatever parent ancestor(s) it needs to obtain the required columns, intelligently aliasing everything and joining in the right way to be performant. It can then pass back the finished SQL as a REF CURSOR which you can parse, open, and fetch from to get results. It works great for me, but it took weeks of work to perfect, and that's with decades of experience in SQL and PL/SQL. This is no simple task, but it is doable. And of course there are always complex needs that defy the capabilities of our metadata model, and for those we end up either creating views or pipelined functions, and registering those in our metadata so that generated SQL can invoke them when needed.
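To make the shape of that metadata concrete, here is a minimal sketch of what such tables could look like (every table and column definition below is an illustrative guess, not the author's actual schema):
-- illustrative sketch only; the real schema will differ
create table sql_statements (
    statement_name   varchar2(30) primary key,
    primary_table    varchar2(30) not null,
    pk_column        varchar2(30) not null
);
create table sql_statement_parents (
    statement_name   varchar2(30) references sql_statements,
    parent_statement varchar2(30) references sql_statements,
    join_attribute   varchar2(30) not null    -- child attribute that joins to the parent's PK
);
create table sql_attribute_dictionary (
    statement_name   varchar2(30) references sql_statements,
    attribute_name   varchar2(30),
    source_column    varchar2(30),
    data_type        varchar2(30),
    derived_expr     varchar2(4000),          -- optional derived-column expression
    primary key (statement_name, attribute_name)
);
create table attribute_dependencies (
    statement_name   varchar2(30),
    attribute_name   varchar2(30),            -- the derived attribute
    depends_on       varchar2(30)             -- attribute needed by the derived expression
);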
But in the end, however you do it, you will not get away from having to describe your data model in detail so that code can walk it.

Showing converted Base64 (from hex) in an existing SQL Server 2019 view

I do voluntary work at an animal shelter. We have an application that uses a SQL Server 2019 database, and I have created a view that includes a varbinary(max) column. The value in this column is a picture, stored in hexadecimal format. I would like to convert this hex value to base64 and add it to the view as an extra column.
I found the perfect solution for my situation in SQL Server: hex to base64. The example provided converts one single hex value into one base64 value. I now need to add this solution to my view, but I'm not having any success.
The offered solution:
DECLARE @TestBinHex varchar(max), @TestBinary varbinary(max), @Statement nvarchar(max);
SELECT @TestBinHex = '0x012345';
SELECT @Statement = N'SELECT @binaryResult = ' + @TestBinHex;
EXECUTE sp_executesql @Statement, N'@binaryResult varbinary(max) OUTPUT', @binaryResult = @TestBinary OUTPUT;
SELECT
    CAST(N'' AS XML).value(
        'xs:base64Binary(xs:hexBinary(sql:column("bin")))'
        , 'VARCHAR(MAX)'
    ) Base64Encoding
FROM
    (SELECT @TestBinary AS bin) AS bin_sql_server_temp;
A simplified version of my view:
SELECT
    a.cat_id, a.catname, s.cat_id,
    s.stay_id, s.shelter_handler, s.shelter_kennel, s.picture
FROM
    dbo.animal AS a
OUTER APPLY
    (SELECT TOP 1 *
     FROM dbo.shelterdata AS s
     WHERE a.cat_id = s.cat_id
     ORDER BY s.stay_id DESC) AS s
WHERE
    (a.cat_id IS NOT NULL) AND (s.leave_date IS NULL)
The view shows an overview of all cats currently present in the shelter (leave_date is NULL). The reason for the TOP 1 is that shelter animals sometimes get returned, and the application then assigns a new stay_id. To prevent duplicate rows from the join, I only return the row with the most recent stay_id.
What I am trying to achieve: the second table (dbo.shelterdata) includes the picture, stored as a hex value. I'd like to add a column Base64Encoding to the view that contains the converted value.
My attempts
I was successful in replacing the static value '0x012345' with a SELECT statement. But the way the solution is formatted, it only allows for one input value, so I had to restrict it with a WHERE clause. It is obvious to me that I need a subquery that feeds in the hex value based on the unique cat_id, but it has been many years since I worked with variables, so I'm struggling with the formatting of the statement.
My request
Does anyone have a suggestion how to build the conversion into the view?
Any assistance would be greatly appreciated.
After searching for a few more hours, I stumbled onto the solution. Maybe it will help someone else in the future. The solution is remarkably simple, as is often the case.
My view, mentioned above, is called dbo.shelter_view
select sv.picture, sv.cat_id,
    cast('' as xml).value(
        'xs:base64Binary(sql:column("sv.picture"))', 'varchar(max)'
    ) as Base64Encoding
from dbo.shelter_view as sv
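For reference, the same conversion can also be folded straight into the original view rather than layering a second view on top. A sketch based on the simplified definition above (untested; it assumes sql:column can see the picture column through the APPLY alias):
SELECT
    a.cat_id, a.catname,
    s.stay_id, s.shelter_handler, s.shelter_kennel, s.picture,
    CAST('' AS XML).value(
        'xs:base64Binary(sql:column("s.picture"))', 'VARCHAR(MAX)'
    ) AS Base64Encoding
FROM
    dbo.animal AS a
OUTER APPLY
    (SELECT TOP 1 *
     FROM dbo.shelterdata AS sd
     WHERE a.cat_id = sd.cat_id
     ORDER BY sd.stay_id DESC) AS s
WHERE
    a.cat_id IS NOT NULL AND s.leave_date IS NULL;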

What is being deleted?

I'm going through some Visual FoxPro 7.0 code, which is new to me. I'm having a bit of trouble deciphering a DELETE command inside a DO WHILE loop that uses two different work areas.
I'm just not clear on what is actually being deleted. When the DELETE command is issued, work area 1 (tblpay) is selected, but I'm not looping through any records. If the DELETE command IS being used against tblpay (work area 1), then it seems it's deleting the record that was just inserted, which makes no sense. Can someone clue me in please?
select 1   && tblpay
USE tblpay
select 2   && tblfac
USE tblfac
GOTO top
DO WHILE NOT EOF()
    lcfy = fy
    lcindex_no = index_no
    lcpca = pca
    lnpys = padl(alltrim(str(cum_pys)), 13, ' ')
    select 1   && tblpay
    LOCATE FOR fy = lcfy AND index_no = lcindex_no AND pca = lcpca
    IF NOT FOUND()
        INSERT INTO tblpay(exp_1,fy,exp_3,exp_4,exp_5,exp_6,index_no,exp_8,pca,cum_pys,reversal) ;
            values('805',lcfy,SPACE(37),lcdoc_date,lccurdoc,'00',lcindex_no,'99802',lcpca,lnpys,'R')
        DELETE
    ENDIF
    select 2   && tblfac
    SKIP
ENDDO
Admittedly the code you show is not very clear.
Some suggested changes:
select 1   && tblpay
USE tblpay
should be
USE tblpay IN 0 && Open table tblpay into the next available work area
and
select 2   && tblfac
USE tblfac
should be
USE tblfac IN 0 && Open table tblfac into the next available work area
Then you would no longer have to remember SELECT 1 or SELECT 2 ("now what did I have in #1 or #2?").
Instead you would choose the table by its alias, such as: SELECT tblfac
The rest of the code doesn't make much sense either.
* You choose table tblfac and loop through its records for values
* Then you go to table tblpay and attempt to LOCATE one or more specific records
* If the tblpay record is NOT FOUND(), you then use the values from tblfac (and other info) and INSERT a new record into tblpay using a SQL command (you could have also used the VFP commands APPEND BLANK followed by REPLACE)
* The DELETE that follows will delete whatever record the current work area is pointing to - however, the way your code is written, that might not be what you want.
The way it looks, if you have NOT FOUND() a matching record in tblpay, your record pointer is still in that table, but it is now at EOF() (End Of File) and not on any actual record, and an attempt to DELETE there will not do anything.
In your VFP development environment, you should use the debug tools to actually 'see' which table and which record the record pointer is 'looking' at.
To do that you might want to use SET STEP ON in the following manner:
IF NOT FOUND()
    INSERT INTO tblpay(exp_1,fy,exp_3,exp_4,exp_5,exp_6,index_no,exp_8,pca,cum_pys,reversal) ;
        values('805',lcfy,SPACE(37),lcdoc_date,lccurdoc,'00',lcindex_no,'99802',lcpca,lnpys,'R')
    SET STEP ON && Added here for debug purposes ONLY
    DELETE
ENDIF
Then when you execute the code in the VFP Development mode and execution hits that line, it will Suspend execution and open the Debug TRACE window - thereby allowing you to investigate the record pointer, etc.
Good Luck
What Dhugalmac has said is partially correct, but not entirely. If the searched record is not found, then you are inserting a record and then deleting that newly inserted record. The pointer is NOT at EOF() but on that new record.
As Dhugalmac said, do not use work area numbers; use aliases instead. The code above is not the real code - it wouldn't run without an error.
If you are using this code and the text it came from for learning, stop reading it immediately and throw it away. The code is terrible and doesn't seem to have a purpose (besides having errors).
If your intent is to learn how to delete from VB.Net, just use VFPOLEDB and a DELETE - SQL command with ExecuteNonQuery (just as you would against SQL Server, PostgreSQL, MySQL ... any ANSI database). With VB.Net, most of the xbase commands have no place (nor do those DO WHILE ... SKIP ... ENDDO loops - you wouldn't even use them from within VFP).

SQLite: how to enable counting number of rows modified from trigger

Is there any way to enable counting of the rows that a trigger modified in SQLite?
I know it is disabled (https://www.sqlite.org/c3ref/changes.html) and I understand why, but can I enable it somehow?
CREATE TABLE Users_data (
Id INTEGER PRIMARY KEY AUTOINCREMENT,
Deleted BOOLEAN DEFAULT (0),
Name STRING
);
CREATE VIEW Users AS
SELECT Id, Name
FROM Users_data
WHERE Deleted = 0;
CREATE TRIGGER UsersDelete2UsersData
INSTEAD OF DELETE
ON Users
FOR EACH ROW
BEGIN
UPDATE Users_data SET Deleted = 1 WHERE Id = OLD.Id;
END;
-- etc for insert & update
Then delete from Users where Name like 'foo'; works fine (it doesn't even need Id = 1), but the number of modified rows is, as the documentation says, always zero.
(I can't modify my DAL to automatically add "where Deleted = 0", so the backup plan is to have a Users_deleted table and an 'on delete' trigger on the Users table without any view, but then I have to keep tracking FKs (for example, what to do when someone deletes from an FK table) and so on...)
Edit: The returned number is used for database concurrency checking.
Edit2: To be more clear: as I said, I cannot modify my DAL (Entity Framework 6), so the preferred answer should behave like this pseudo-code: int affectedRows = query("delete from Users where Name like 'foo';").Execute();
It's all about SQLite's "trigger on view" behavior.
Use sqlite3_total_changes() instead:
This function returns the total number of rows inserted, modified or deleted by all INSERT, UPDATE or DELETE statements completed since the database connection was opened, including those executed as part of trigger programs.
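The same counter is also exposed at the SQL level as total_changes(), so one workaround is to sample it immediately before and after the statement and diff the two values (a minimal sketch against the schema above; it does count the trigger's UPDATE, but it requires issuing extra queries, so it won't help a fixed DAL like EF6):
select total_changes();                    -- remember this value, call it N
delete from Users where Name like 'foo';   -- fires the INSTEAD OF trigger
select total_changes();                    -- new value minus N = rows updated by the trigger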
It's impossible in sqlite3 (as of 2015).
Basically I was looking for an INSTEAD OF trigger on a view (as in the question) with a way to return the count, which is not supported in SQLite.
By the way, PostgreSQL (and I believe some other full database servers) can do it.

Generate seed data in Visual Studio 2010 Database project

I have a database and a database project in Visual Studio 2010. I imported the schema into the database project successfully, but I also need to import the data in a few tables (Country, State, UserType, etc.) that are reference tables rather than data tables.
Is there a way?
The only way I have found so far is generating data scripts in SQL Server Management Studio and placing them in the post-deployment script file in the database project.
Any simpler way?
Try the Static Data Script Generator for SQL Server. It automates those post-deployment scripts for you in the correct format. It is a free Google Code-hosted project, and we have found it useful for scripting our static data (including over the top of existing data for updates).
This is pretty much how I've done it before.
I would just make sure that each statement in your script looks something like:
IF EXISTS (SELECT * FROM Country WHERE CountryId = 1)
    UPDATE Country SET Name = 'UK' WHERE CountryId = 1 AND Name != 'UK'
ELSE
    INSERT INTO Country (CountryId, Name) VALUES (1, 'UK')
This means that each time you deploy the database your core reference data will be inserted or updated, whichever is most appropriate, and you can modify these scripts to create new reference data for newer versions of the database.
You could use a T4 template to generate these scripts - I've done something similar in the past.
You can also use the MERGE INTO statement for updating/deleting/inserting seed data into the table in your post-deployment script. I have tried it and it works for me; here is a simple example:
print 'Inserting seed data for seedingTable'
MERGE INTO seedingTable AS Target
USING (VALUES
    (1, N'Pakistan', N'Babar Azam', N'Asia', N'1'),
    (2, N'England', N'Nasir Hussain', N'Wales', N'2'),
    (3, N'Newzeland', N'Stepn Flemming', N'Australia', N'4'),
    (4, N'India', N'Virat Koli', N'Asia', N'3'),
    (5, N'Bangladash', N'Saeed', N'Asia', N'8'),
    (6, N'Srilanka', N'Sangakara', N'Asia', N'7')
) AS Source (Id, Cric_name, captain, region, [T20-Rank]) ON Target.Id = Source.Id
-- update matched rows
WHEN MATCHED THEN
    UPDATE SET Cric_name = Source.Cric_name, Captain = Source.Captain, Region = Source.Region, [T20-Rank] = Source.[T20-Rank]
-- insert new rows
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Cric_name, captain, region, [T20-Rank])
    VALUES (Id, Cric_name, captain, region, [T20-Rank])
-- delete rows that are in the target but not the source
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
I'm using a post-deployment script and it executes perfectly, creating the new table and seeding data into it; the script is below. After that, I want to add a new column and insert data into it. How can I do this?
insert into seedingTable (Id, Cric_name, captain, region, [T20-Rank])
select 1, N'Pakistan', N'Babar Azam', N'Asia', N'1'
where not exists
    (select 1 from dbo.seedingTable where id = 1)
go
insert into seedingTable (Id, Cric_name, captain, region, [T20-Rank])
select 2, N'England', N'Nasir Hussain', N'Wales', N'3'
where not exists
    (select 1 from dbo.seedingTable where id = 2)
Also, will the script above run every time the database is deployed via an Azure pipeline? And how do I update the data?
