I have a Visual Studio 2013 database project, in which I would like to change all GUID columns with a default of newid() to newsequentialid().
Using the following query to identify the columns:
SELECT so.name AS table_name,
       sc.name AS column_name,
       sm.text AS default_value
FROM sys.sysobjects so
JOIN sys.syscolumns sc ON sc.id = so.id
LEFT JOIN sys.syscomments sm ON sm.id = sc.cdefault
WHERE so.xtype = 'U'
  AND sm.text = '(newid())'
ORDER BY so.[name], sc.colid
There are a total of 62 such columns across 62 tables.
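(Aside: the same lookup can be written against the modern catalog views instead of the deprecated sys.sysobjects/syscolumns/syscomments compatibility views. A sketch that should be equivalent:)
SELECT t.name AS table_name,
       c.name AS column_name,
       dc.definition AS default_value
FROM sys.tables t
JOIN sys.columns c ON c.object_id = t.object_id
JOIN sys.default_constraints dc ON dc.parent_object_id = t.object_id
                               AND dc.parent_column_id = c.column_id
WHERE dc.definition = '(newid())'
ORDER BY t.name, c.column_id;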
Is there a way to do this in Visual Studio other than going to each table definition one-by-one and changing the default value?
Find-and-replace across all *.sql files in the project would do it. I'd try the find/replace first, and then see whether you need to tweak the refactorlog in some fashion to handle the refactoring.
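For example, each affected column definition in the project's table scripts (the table and column names here are illustrative) would go from:
[RowId] UNIQUEIDENTIFIER NOT NULL DEFAULT (newid()),
to:
[RowId] UNIQUEIDENTIFIER NOT NULL DEFAULT (newsequentialid()),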
Related
I'm looking for a way to programmatically generate an SQL query that selects all child-table columns starting from a parent table.
Let's say you have a table Class (teacher, room, program) and a table Student (firstname, lastname, age, score, email).
Let's say you want to get a select of all students in Class.
Sure you could write the query manually.
But now imagine you have a complex table with dozens of child tables: how do you do this efficiently/programmatically?
This is something all programmers would like to have, no?
I can't believe no one has ever done it.
I understand the answer may depend on the DBMS vendor; I'm personally looking for a solution for Oracle.
Questions that are a bit similar:
Oracle: Easy way to find names and/or number of child record tables
Postgres: select data from parent table and all child tables
And here is an idea to solve this partially: use a tool such as Power BI or Visual Studio to generate a model from the database in ASP.NET MVC. You won't get the SQL query, but you will get the data.
You can start with this POC:
select
    juc.table_name as parent_table,
    /*
    uc.table_name as child_table, uc.constraint_name, uc.r_constraint_name,
    juc.constraint_type,
    uccc.column_name as parent_col_name, uccc.position as parent_col_position,
    uccp.column_name as child_col_name, uccp.position as child_col_position,
    */
    'SELECT c.* FROM ' || juc.table_name || ' p JOIN ' || uc.table_name || ' c ON '
    || LISTAGG('c.' || uccp.column_name || ' = p.' || uccc.column_name, ' AND ')
       WITHIN GROUP (order by uccc.position)
    as sql
from user_constraints uc
join user_constraints juc on juc.constraint_name = uc.r_constraint_name
join user_cons_columns uccc on uccc.constraint_name = uc.r_constraint_name
join user_cons_columns uccp on uccp.constraint_name = uc.constraint_name
                           and uccc.position = uccp.position
where uc.constraint_type = 'R'
group by uc.table_name, juc.table_name, uc.constraint_name;
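For the Class/Student example above, assuming Student carries a foreign key column CLASS_ID referencing a primary key ID on Class (hypothetical names), the query would emit a row like:
SELECT c.* FROM CLASS p JOIN STUDENT c ON c.CLASS_ID = p.ID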
You can create your own entity relationship model metadata and write PL/SQL that will traverse it and assemble SQL intelligently. I've done this myself to avoid having to hard-code SQL in my front-end apps. But it is highly complex and involves a lot of coding, far more than can be shared in a forum like this. But to give you the general gist, I have the following metadata tables that describe my model:
sql_statements - associates a logical entity with a primary table, and specifies the PK column.
sql_statement_parents - defines the parent entity and the child attribute used to join to the parent's PK.
sql_attribute_dictionary - lists every available attribute for every statement, the source column, its datatype, plus optional derived column expressions.
attribute_dependencies - used for derived column expressions, specifies which attributes are needed by the derived attribute.
Then you write code that takes a sql_statement name, a list of desired attributes, and a set of optional filters. It builds a list of needed source tables/columns using the data relationships in the metadata, then uses the parent-child relationships to recursively build SQL (using nested query blocks) from the child up to whatever parent ancestor(s) it needs for the required columns, intelligently aliasing everything and joining in the right way to be performant. It can then pass back the finished SQL as a REF CURSOR, which you can parse, open and fetch from to get results. It works great for me, but it took weeks of work to perfect, and that's with decades of experience in SQL and PL/SQL. This is no simple task, but it is doable. And of course there are always complex needs that defy the capabilities of our metadata model, so for those we end up creating views or pipelined functions and registering them in our metadata so that generated SQL can invoke them when needed.
But in the end, however you do it, you will not get away from having to describe your data model in detail so that code can walk it.
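To give a concrete flavor of such metadata, here is a minimal, hypothetical sketch of the first two tables and a hierarchical query that walks a statement's parent chain (all DDL and column names are illustrative, not the actual design described above):
-- Hypothetical minimal versions of the first two metadata tables:
CREATE TABLE sql_statements (
    statement_name  VARCHAR2(30) PRIMARY KEY,
    primary_table   VARCHAR2(30) NOT NULL,
    pk_column       VARCHAR2(30) NOT NULL
);

CREATE TABLE sql_statement_parents (
    statement_name  VARCHAR2(30) REFERENCES sql_statements,
    parent_name     VARCHAR2(30) REFERENCES sql_statements,
    join_attribute  VARCHAR2(30) NOT NULL  -- child attribute joined to the parent's PK
);

-- Walking one statement's parent chain with a hierarchical query:
SELECT LEVEL AS depth, statement_name, parent_name, join_attribute
FROM sql_statement_parents
START WITH statement_name = 'STUDENT'
CONNECT BY PRIOR parent_name = statement_name;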
I do voluntary work at an animal shelter. We have an application which uses a SQL Server 2019 database. I have created a view that includes a varbinary(max) column. The value in this column is a picture, stored in hexadecimal format. I would like to convert this hex value to base64 and add it to the view as an extra column.
I found the perfect solution for my situation in SQL Server: hex to base64. The example provided converts a single hex value into a single base64 value. I now need to add this solution to my view, but I'm not having any success.
The offered solution:
DECLARE @TestBinHex varchar(max), @TestBinary varbinary(max), @Statement nvarchar(max);
SELECT @TestBinHex = '0x012345';
SELECT @Statement = N'SELECT @binaryResult = ' + @TestBinHex;
EXECUTE sp_executesql @Statement, N'@binaryResult varbinary(max) OUTPUT', @binaryResult = @TestBinary OUTPUT;
SELECT
    CAST(N'' AS XML).value(
        'xs:base64Binary(xs:hexBinary(sql:column("bin")))'
        , 'VARCHAR(MAX)'
    ) AS Base64Encoding
FROM
    (SELECT @TestBinary AS bin) AS bin_sql_server_temp;
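(Side note: if the starting point is a hex string rather than an actual binary value, the dynamic SQL step can be avoided entirely; CONVERT with style 1 parses '0x...' literals directly. A sketch:)
DECLARE @TestBinHex varchar(max) = '0x012345';
-- style 1 tells CONVERT to interpret the string as a hex literal
SELECT CONVERT(varbinary(max), @TestBinHex, 1) AS bin;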
A simplified version of my view:
SELECT
a.cat_id, a.catname, s.cat_id,
s.stay_id, s.shelter_handler, s.shelter_kennel, s.picture
FROM
dbo.animal AS a
OUTER APPLY
(SELECT TOP 1 *
FROM dbo.shelterdata AS s
WHERE a.cat_id = s.cat_id
ORDER BY s.stay_id DESC) AS s
WHERE
(a.cat_id IS NOT NULL) AND (s.leave_date IS NULL)
The view shows an overview of all cats currently present in the shelter (leave_date is NULL). The reason for the TOP 1 is that sometimes shelter animals get returned, and the application then assigns a new stay_id. To prevent duplicate values from the join, I only return the value of the most recent stay_id.
What I am trying to achieve: the second table (dbo.shelterdata) includes the picture, stored as a hex value. I'd like to add a column Base64Encoding to the view which contains the converted value.
My attempts
I was successful in replacing the static value '0x012345' with a SELECT statement. But the way the solution is formatted, it only allows for one input value, so I had to restrict it with a WHERE clause. It is obvious to me that I need a subquery that feeds in the hex value based on the unique cat_id. However, it has been many years since I worked with variables, so I'm struggling with the formatting of the statement.
My request
Does anyone have a suggestion how to build the conversion into the view?
Any assistance would be greatly appreciated.
After searching for a few more hours, I stumbled onto the solution. Maybe it will help someone else in the future. The solution is remarkably simple, as is often the case.
My view, mentioned above, is called dbo.shelter_view
select sv.picture, sv.cat_id,
    cast('' as xml).value(
        'xs:base64Binary(sql:column("sv.picture"))', 'varchar(max)'
    ) as Base64Encoding
from dbo.shelter_view as sv
We are using SSDT to create a build and deploy pipeline using Git and ADO. In order for the solution to build, all cross-database references must have a corresponding Database Reference in the project and must be referred to using a [$(DatabaseVariable)]. This means that every time a change is made and synched using Schema Compare, the database names have to be replaced either manually or with a find-and-replace. There are of course numerous downsides to the find-and-replace approach, including the fact that it will find and replace references in the project files that should not be database variables. I'm hoping someone knows of a way to automate this process that doesn't involve such a brute force method as find-and-replace.
I have searched around extensively for this and found nothing helpful.
Here's an example view that contains cross-database references:
CREATE view [Migration].[vwCHILDS_Allegation_Allegation]
as
with src as (
select
cast('' as nvarchar(50)) CEAllegationIdentifier
, (select AllegationTypeId from [$(CWNS_Migration)].Allegation.AllegationType where Code = dfrvmi1.DestinationDataFieldReferenceValueCode) AllegationTypeId
, cast(1 as int) SourceSystemId
, cast(src.IDNBR as nvarchar(64)) SourceSystemIdentifier
, src.IDNBR SourceSystemIdentifier_Numeric
, case when src.CRET_DT_TM = '0001-01-01' then null else src.CRET_DT_TM end SourceSystemCreatedDateTime
, case when src.MOD_DT_TM = '0001-01-01' then null else src.MOD_DT_TM end SourceSystemModifiedDateTime
, (
select
max(pe.PersonId)
from
[$(CWNS_Migration)].PersonIdentity.PersonIdentifier pe
join [$(CHILDSDB2)].VLCHA.STAFF_USERID st on cast(st.FK_STAFFFK_PERSID as nvarchar(64)) = pe.Identifier
and pe.PersonIdentificationSystemId = 4
where
st.USERID = ltrim(rtrim(src.MOD_USR_ID))) SourceSystemModifiedPersonId
from
[$(CHILDSDB2)].VLCHA.ALGTN src
left join [$(DataCatalog)].dbo.DataFieldReferenceValueMappingInfo dfrvmi1 on dfrvmi1.SourceDataFieldReferenceValueDataFieldId = 216
and dfrvmi1.SourceDataFieldReferenceValueCode = ltrim(rtrim(src.FK_ALGTN_PRIORICTG))
and dfrvmi1.DestinationDataFieldReferenceValueDataFieldId = 20605
)
select
src.*
from
src
left join [$(CWNS_Migration)].Allegation.Allegation tgt on tgt.SourceSystemId = src.SourceSystemId and tgt.SourceSystemIdentifier = src.SourceSystemIdentifier
left join [$(CWNS_Migration)].Quarantine.Allegation q on q.SourceSystemId = src.SourceSystemId and q.SourceSystemIdentifier = src.SourceSystemIdentifier
and q.QExecutionId = 1
where
q.QExecutionId is null
and (
isnull(src.AllegationTypeId, 0) <> isnull(tgt.AllegationTypeId, 0)
or isnull(try_cast(src.SourceSystemCreatedDateTime as datetime), '') <> isnull(tgt.SourceSystemCreatedDateTime, '')
or isnull(try_cast(src.SourceSystemModifiedDateTime as datetime), '') <> isnull(tgt.SourceSystemModifiedDateTime, '')
or isnull(src.SourceSystemModifiedPersonId, 0) <> isnull(tgt.SourceSystemModifiedPersonId, 0)
)
I'm also hoping there's a way to avoid having Schema Compare always show the variables as differences with the database.
Ideally you need to include all databases as references in your project (right-click the project and choose "Add Database Reference", or something like that). You can do that in two ways:
* Create a project for each database and import everything, or only the objects used by your database
* Extract a dacpac from the live database and use it as a reference
Then create synonyms for every cross-database object in your main database. For example, for the object [$(CWNS_Migration)].PersonIdentity.PersonIdentifier you would create a synonym like:
CREATE SYNONYM PersonIdentity.PersonIdentifier
FOR [$(CWNS_Migration)].PersonIdentity.PersonIdentifier;
and then use PersonIdentity.PersonIdentifier (the two-part name) in your code instead of the three- or four-part names. That's the project setup.
Now, when you right-click the project and choose "Publish", a dialog pops up where you can specify the value for the CWNS_Migration variable and change it to whatever you need. You then have the option to save these settings (Save As); the result of that save is called a "publish profile". So for every environment you just need to create a different publish profile and use it when publishing your changes.
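If you want to test such a script outside of publishing, SSMS's SQLCMD mode accepts the same variables via :setvar (the value here is illustrative):
:setvar CWNS_Migration CWNS_Migration_Dev
SELECT TOP (10) * FROM [$(CWNS_Migration)].PersonIdentity.PersonIdentifier;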
You might want to start with synonyms first; that alone might work (I'd need to verify), but I would recommend adding the database references anyway.
I need to work with a SQL result set in order to do some processing for each column (medians, standard deviations, several control statements included)
The SQL is dynamic, so I don't know the number of columns or rows in advance.
First I tried to use temporary tables, views, etc. to store the results; however, I could not overcome Oracle's 30-character limit on column names when using the SQL below:
create table (or view or global temporary table) as select * from (
    SELECT
        DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE,
        SUM(DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI_CHZ + DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI) -- generated column name exceeds the 30-character limit
    FROM DMTTBF_MAT_MATURATO_BILL_POS
    WHERE DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE >= '201301'
    GROUP BY DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE
)
My second choice was to use some PL/SQL type to store the entire table's contents, so I could address it as in other programming languages (e.g. a matrix result[i][j]), but I could not find anything similar.
The third variant, using files for reading and writing, I have not tried yet; I'm still hoping for a more elegant PL/SQL solution.
It's possible that I have the wrong approach here so any advice is more than welcome.
UPDATE: Modifying the input SQL is not an option. The program has to accept any select statement.
Note that you can alias both tables and fields. Using a table alias keeps references to it from producing walls of text in the query. Using one for a field gives it a new name in the output.
SELECT A.LONG_FIELD_NAME_HERE AS SHORTNAME
FROM REALLY_LONG_TABLE_NAME_HERE A
The auto-naming adds _1, _2, etc. to differentiate the same column name coming from different table references, which often pushes a field that was already borderline over the limit. Giving the fields names yourself bypasses this.
You can also add the alias in dynamic SQL. Note that the alias has to be a quoted identifier, since the truncated expression text contains characters that are invalid in ordinary identifiers:
sqlstr := 'create table (or view or global temporary table) as select * from (
    SELECT
        DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE,
        SUM(DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI_CHZ + DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI) AS "'
    || SUBSTR('SUM(DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI_CHZ + DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI)', 1, 30)
    || '" FROM DMTTBF_MAT_MATURATO_BILL_POS
    WHERE DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE >= ''201301''
    GROUP BY DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE
)';
I have a database and a database project in Visual Studio 2010. I imported the schema into the database project successfully, but I also need to somehow import the data in a few tables (Country, State, UserType, etc.) that are reference tables rather than real data tables.
Is there a way?
The only way I found so far is generating data scripts in SQL Server Management Studio and placing this script in the post-deployment script file in the database project.
Any simpler way?
Try the Static Data Script Generator for SQL Server. It automates those post-deployment scripts for you in the correct format. It is a free Google Code hosted project, and we have found it useful for scripting our static data (including over the top of existing data, for updates).
This is pretty much how I've done it before.
I would just make sure that each statement in your script looks something like:
IF (EXISTS(SELECT * FROM Country WHERE CountryId = 1))
    UPDATE Country SET Name = 'UK' WHERE CountryId = 1 AND Name != 'UK'
ELSE
    INSERT INTO Country (CountryId, Name) VALUES (1, 'UK')
This means that each time you deploy the database your core reference data will be inserted or updated, whichever is most appropriate, and you can modify these scripts to create new reference data for newer versions of the database.
You could use a T4 template to generate these scripts - I've done something similar in the past.
You can also use the MERGE INTO statement for updating/deleting/inserting seed data into the table in your post-deployment script. I have tried it and it works for me. Here is a simple example:
print 'Inserting seed data for seedingTable'
MERGE INTO seedingTable AS Target
USING (VALUES (1, N'Pakistan', N'Babar Azam', N'Asia', N'1'),
              (2, N'England', N'Nasir Hussain', N'Wales', N'2'),
              (3, N'Newzeland', N'Stepn Flemming', N'Australia', N'4'),
              (4, N'India', N'Virat Koli', N'Asia', N'3'),
              (5, N'Bangladash', N'Saeed', N'Asia', N'8'),
              (6, N'Srilanka', N'Sangakara', N'Asia', N'7'))
    AS Source (Id, Cric_name, Captain, Region, [T20-Rank]) ON Target.Id = Source.Id
-- update matched rows
WHEN MATCHED THEN
    UPDATE SET Cric_name = Source.Cric_name, Captain = Source.Captain, Region = Source.Region, [T20-Rank] = Source.[T20-Rank]
-- insert new rows
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Cric_name, Captain, Region, [T20-Rank])
    VALUES (Id, Cric_name, Captain, Region, [T20-Rank])
-- delete rows that are in the target but not the source
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
I am using a post-deployment script, and it executes perfectly, creating the new table and seeding data into it; the script is below. After that, I want to add a new column and insert data into it. How can I do this?
insert into seedingTable (Id, Cric_name, captain, region, [T20-Rank])
select 1, N'Pakistan', N'Babar Azam', N'Asia', N'1'
where not exists
    (select 1 from dbo.seedingTable where id = 1)
go
insert into seedingTable (Id, Cric_name, captain, region, [T20-Rank])
select 2, N'England', N'Nasir Hussain', N'Wales', N'3'
where not exists
    (select 1 from dbo.seedingTable where id = 2)
Also, let me know: will the above script run every time the database is deployed via the Azure pipeline? And how do I update the data?
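For what it's worth: yes, post-deployment scripts run on every publish, so the seeding logic must stay idempotent. A minimal sketch of populating a newly added column, reusing the pattern above (the name NewColumn and its value are hypothetical):
-- NewColumn is a hypothetical name; in SSDT the column itself is added to the
-- table's CREATE TABLE script, and the publish diff issues the ALTER TABLE.
-- The post-deployment script then only populates it, idempotently:
update dbo.seedingTable
set NewColumn = N'some value'   -- illustrative value
where id = 1
  and (NewColumn is null or NewColumn <> N'some value')
go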