Can I use partial Database Projects in the following scenario?
I have numerous databases that are all interdependent: Database2 executes SPs on Database1 and Database3, and Database3 executes SPs on Database2.
For example:
Database 1 could be defined as:
CREATE TABLE [dbo].[X]
(
Id int NOT NULL,
X int NULL
)
CREATE PROCEDURE [dbo].[GetX]
AS
BEGIN
SELECT * FROM [dbo].[X]
END
Database 2 could be defined as:
CREATE PROCEDURE [dbo].[GetX]
AS
BEGIN
EXEC [Database1].[dbo].[GetX]
END
CREATE PROCEDURE [dbo].[GetXY]
AS
BEGIN
EXEC [dbo].[GetX]
EXEC [Database3].[dbo].[GetY]
END
Database 3 could be defined as:
CREATE TABLE [dbo].[Y]
(
Id int NOT NULL,
Y int NULL
)
CREATE PROCEDURE [dbo].[GetX]
AS
BEGIN
EXEC [Database2].[dbo].[GetX]
END
CREATE PROCEDURE [dbo].[GetY]
AS
BEGIN
SELECT * FROM [dbo].[Y]
END
Because Database2 is dependent on Database3 and Database3 on Database2, there is a circular reference. I'm not sure how partial projects would help me here, because if I separated Database2 into base and external objects I would still have a circular reference on the external project. So, as I understand it, this is not allowed.
Apologies for the explanation; the databases are hurting my head too. I will try to be more verbose if required.
This has effectively been answered by http://social.msdn.microsoft.com/Forums/en/vstsdb/thread/8357d599-9d03-4ed6-879f-e090f9c96fcd
In short, for simple scenarios such as the example this will work, but at our level of complexity the management required will prevent the use of Database Projects.
We still plan to generate the required object files using VS in order to store them in TFS. Unfortunately, we will be unable to build deployment scripts.
Related
I see that the concept of a temporary table in Oracle is quite different from other databases like SQL Server. In Oracle we have the concept of a global temporary table: we create it only once and in each session we fill it with data, which is not how it works in other databases.
In 18c, Oracle has introduced the concept of private temporary tables, which can be dropped after use like temporary tables in other databases. But how do we use one in a PL/SQL block?
I tried using it with dynamic SQL (EXECUTE IMMEDIATE), but it is giving me a 'table must be declared' error. What do I do here?
But how do we use it in a PL/SQL block?
If what you mean is, how can we use private temporary tables in a PL/SQL program (procedure or function), the answer is simple: we can't. PL/SQL programs need to be compiled before we can call them. This means any table referenced in the program must exist at compilation time. Private temporary tables don't change that.
The private temporary table is intended for use in ad hoc SQL work. It allows us to create a data structure we can use in SQL statements for the duration of a session, to make life easier for ourselves.
For instance, suppose I have a massive table of sales data - low level transactions - and my task is to investigate monthly trends. So I only need the total sales by month. Unfortunately, there is no materialized view providing this summary. I don't want to include the aggregating query in my select statements. In previous versions I would have had to create a permanent table (and had to remember to drop it afterwards) but in 18c I can use a private temporary table to stage my summary just for the session.
create private temporary table ora$ptt_sales_summary (
sales_month date
, total_value number )
/
insert into ora$ptt_sales_summary
select trunc(sales_date, 'MM')
, sum (qty*price)
from massive_sales_table
group by trunc(sales_date, 'MM')
/
select *
from ora$ptt_sales_summary
order by sales_month
/
Obviously we can write anonymous PL/SQL blocks in our session but let's continue assuming that's not what you need. So what is the equivalent of a private temporary table in a permanent PL/SQL program? Same as it's been for several versions now: a PL/SQL collection or a SQL nested table type.
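For instance, here is a rough, untested sketch of the sales-summary example above reworked as a collection inside a permanent procedure (show_monthly_sales is just an illustrative name; the table and columns are the hypothetical ones from the example):
create or replace procedure show_monthly_sales
is
    type r_summary is record (
        sales_month date
      , total_value number );
    type nt_summary is table of r_summary;
    l_summary nt_summary;
begin
    -- stage the monthly totals in a collection instead of a private temporary table
    select trunc(sales_date, 'MM')
         , sum(qty * price)
      bulk collect into l_summary
      from massive_sales_table
     group by trunc(sales_date, 'MM');

    -- the collection is now available for the rest of the program
    for i in 1 .. l_summary.count loop
        dbms_output.put_line(to_char(l_summary(i).sales_month, 'YYYY-MM')
                             || ' : ' || l_summary(i).total_value);
    end loop;
end show_monthly_sales;
/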
Private temporary tables (available from Oracle 18c) are dropped at the end of the session or transaction, depending on how the PTT is defined.
The ON COMMIT DROP DEFINITION option creates a private temporary table that is transaction-specific. At the end of the transaction, Oracle drops both the table definition and its data.
The ON COMMIT PRESERVE DEFINITION option creates a private temporary table that is session-specific. Oracle removes all data and drops the table at the end of the session.
You do not need to drop it manually. Oracle will do it for you.
CREATE PRIVATE TEMPORARY TABLE ora$ptt_temp_table (
......
)
ON COMMIT DROP DEFINITION;
-- or
-- ON COMMIT PRESERVE DEFINITION;
Example of ON COMMIT DROP DEFINITION: the table is dropped as soon as COMMIT is executed.
Example of ON COMMIT PRESERVE DEFINITION: the table is retained after COMMIT is executed, but it will be dropped at the end of the session.
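A rough sketch of the two behaviours (I have not been able to run this against an 18c database, so treat it as an outline; the table names are just illustrative):
-- Transaction-specific: definition and data disappear once COMMIT runs
CREATE PRIVATE TEMPORARY TABLE ora$ptt_txn_demo (id NUMBER)
    ON COMMIT DROP DEFINITION;
INSERT INTO ora$ptt_txn_demo VALUES (1);
COMMIT;
-- a SELECT from ora$ptt_txn_demo here fails: the table no longer exists

-- Session-specific: the table and its data survive COMMIT, dropped at end of session
CREATE PRIVATE TEMPORARY TABLE ora$ptt_ses_demo (id NUMBER)
    ON COMMIT PRESERVE DEFINITION;
INSERT INTO ora$ptt_ses_demo VALUES (1);
COMMIT;
SELECT * FROM ora$ptt_ses_demo;  -- still returns the row until the session ends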
Cheers!!
It works with dynamic SQL:
declare
    cnt int;
begin
    execute immediate 'create private temporary table ora$ptt_tmp (id int)';
    execute immediate 'insert into ora$ptt_tmp values (55)';
    execute immediate 'insert into ora$ptt_tmp values (66)';
    execute immediate 'insert into ora$ptt_tmp values (77)';
    execute immediate 'select count(*) from ora$ptt_tmp' into cnt;
    dbms_output.put_line(cnt);
    execute immediate 'delete from ora$ptt_tmp where id = 66';
    cnt := 0;
    execute immediate 'select count(*) from ora$ptt_tmp' into cnt;
    dbms_output.put_line(cnt);
end;
/
Example here:
https://livesql.oracle.com/apex/livesql/s/l7lrzxpulhtj3hfea0wml09yg
I've been tasked with improving old PL/SQL and Oracle SQL legacy code. In all there are around 7000 lines of code! One aspect of the existing code that really surprises me is that the previous coder needlessly created hundreds of lines of code by not writing any procedures or functions - instead the coder essentially repeats the same code throughout.
For example, in the existing code there are literally 40 or more repetitions of the following SQL:
CREATE TABLE tmp_clients
AS
SELECT * FROM live.clients;
CREATE TABLE tmp_customers
AS
SELECT * FROM live.customers;
CREATE TABLE tmp_suppliers
AS
SELECT * FROM live.suppliers WHERE type_id = 1;
and many, many more.....
I'm very new to writing in PL/SQL, though I have recently purchased the excellent book "Oracle PL/SQL programming" by Steven Feuerstein. However, as far as I can tell, I should be able to write a callable procedure such as:
procedure create_temp_table (new_table_nme in varchar(60),
                             source_table in varchar(60))
IS
  s_query varchar2(100);
BEGIN
  s_query := 'CREATE TABLE ' || new_table_nme || ' AS SELECT * FROM ' || source_table;
  execute immediate s_query;
EXCEPTION
  WHEN OTHERS THEN
    IF SQLCODE = -955 THEN
      NULL;
    ELSE
      RAISE;
    END IF;
END;
I would then simply call the procedure as follows:
create_temp_table('tmp.clients', 'live.clients');
create_temp_table('tmp.customers', 'live.customers');
Is my proposed approach reasonable given the problem as stated?
Are the datatypes in the procedure call reasonable, ie should varchar2(60) be used, or is it possible to force the 'source_table' parameter to be a table name in the schema? What happens if the table name is more than 60 characters?
I want to be able to pass a third, non-required parameter in cases where the data has to be restricted in a trivial way, ie to deal with cases like "WHERE type_id = 1". How do I modify the procedure to include a parameter that is only used occasionally, and how would I modify the rest of the code? I would probably add some sort of IF/ELSE statement to check whether the third parameter was not NULL and then construct s_query accordingly.
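For example, something roughly like this (an untested sketch; I've left out the parameter lengths and picked an arbitrary size for s_query):
procedure create_temp_table (new_table_nme in varchar2,
                             source_table  in varchar2,
                             where_clause  in varchar2 default null)
IS
  s_query varchar2(500);
BEGIN
  s_query := 'CREATE TABLE ' || new_table_nme || ' AS SELECT * FROM ' || source_table;
  -- only restrict the data when the optional third parameter is supplied
  IF where_clause IS NOT NULL THEN
    s_query := s_query || ' WHERE ' || where_clause;
  END IF;
  execute immediate s_query;
END;
which I would then call as:
create_temp_table('tmp_suppliers', 'live.suppliers', 'type_id = 1');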
How would I check that the table has actually been created successfully?
I want to trap for two other exceptions, namely
The new table (eg 'tmp.clients') already exists; and
The source table doesn't exist.
Does the EXCEPTION as written handle these cases?
More generally, from where can I obtain the SQL error codes and their meanings?
Any suggested improvements to the code would be gratefully received.
You could get rid of a lot of code (gradually!) by using GLOBAL temporary tables.
Execute immediate is not a bad practice but if there are other options then they should be used. Global temp tables are common where you want to extract and transform data but once processed you don't need it anymore until the next load. Each user can only see the data they insert and no redo logs are generated. You can index the data for faster querying if required.
Something like this
-- Create table
create global temporary table GT_CLIENTS
(
id NUMBER(10) not null,
Client_id NUMBER(10) not null,
modified_by_id NUMBER(10),
transaction_id NUMBER(10),
local_transaction_id VARCHAR2(30) not null,
last_modified_date_tz TIMESTAMP(6) WITH TIME ZONE not null
)
on commit preserve rows;
I recommend the on commit preserve rows option so that you can debug your procedure and see what went into the table.
Usage would be
INSERT INTO GT_CLIENTS
SELECT * FROM live.clients;
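And, as mentioned, the global temporary table can be indexed up front if you need faster querying; for example (the index name here is just illustrative, the column is from the sketch above):
create index gt_clients_client_idx on gt_clients (client_id);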
If this is the route you want to take to minimize changes, then note that the error for "source table does not exist" is -942, which you will want to stop for rather than continue, as your temp table would not have been created. Similarly, just continuing if you get an "object already exists" error will be problematic, as you will not have reloaded it with the new data - the create failed, so the table still has the data from the last run. So I would definitely do some more thinking about your exception handler.
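If you do stay with the dynamic CREATE TABLE approach, a sketch (not a drop-in replacement) of a handler that stops on both of those errors instead of swallowing ORA-00955 might look like this:
create or replace procedure create_temp_table (new_table_nme in varchar2,
                                               source_table  in varchar2)
is
  e_name_in_use   exception;
  e_no_such_table exception;
  pragma exception_init(e_name_in_use,   -955); -- ORA-00955: name is already used by an existing object
  pragma exception_init(e_no_such_table, -942); -- ORA-00942: table or view does not exist
begin
  execute immediate 'CREATE TABLE ' || new_table_nme || ' AS SELECT * FROM ' || source_table;
exception
  when e_name_in_use then
    -- the old table still holds last run's data: fail (or drop and recreate) rather than continue
    raise;
  when e_no_such_table then
    -- the source table name is wrong: definitely do not carry on with the load
    raise;
end create_temp_table;
/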
That said, I also concur that this is generally not the best way to do things. Creating and dropping objects in a multi-user environment is a disaster in the making, and seems a silly waste of resources when there are more appropriate options available.
I have read and understand that Oracle uses only global temp tables, unlike MS SQL which allows #temp tables. The situation that I have would call for me to create hundreds of global temp tables in order to complete the DB conversion I am working on from MS SQL to Oracle. I want to know if there is another method out there, within an Oracle stored procedure, other than creating all of these tables, which will have to be maintained in the DB.
Thank You
" Most of the time the only thing the temp tables are used within a
stored proc and then truncated at the end. We do constant upgrades to
our applications and having them somewhat comparable ensures that when
a change is made in one version that it can be easily merged to the
other."
T-SQL Temp tables are essentially memory structures. They provide benefits in MSSQL which are less obvious in Oracle, because of differences in the two RDBMS architectures. So if you were looking to migrate then you would be well advised to take an approach more fitted to Oracle.
However, you have a different situation, and obviously keeping the two code bases in sync will make your life easier.
The closest thing to temporary tables as you want to use them are PL/SQL collections; specifically, nested tables.
There are a couple of ways of declaring these. The first is to use a SQL template - a cursor - and define a nested table type based on it. The second is to declare a record type and then define a nested table on that. In either case, populate the collection variable with a bulk operation.
declare
-- approach #1 - use a cursor
cursor c1 is
select *
from t23;
type nt1 is table of c1%rowtype;
recs1 nt1;
-- approach #1a - use a cursor with an explicit projection
cursor c1a is
select id, col_d, col_2
from t23;
type nt1a is table of c1a%rowtype;
recs1a nt1a;
-- approach #2 - use a PL/SQL record
type r2 is record (
my_id number
, some_date date
, a_string varchar2(30)
);
type nt2 is table of r2;
recs2 nt2;
begin
select *
bulk collect into recs1
from t23;
select id, col_d, col_2
bulk collect into recs2
from t23;
end;
/
Using a cursor offers the advantage of automatically reflecting changes in the underlying table(s), while the RECORD provides the advantage of stability in the face of changes in the underlying table(s). It just depends what you want :)
There's a whole chapter in the PL/SQL reference manual. Read it to find out more.
I'm working on adding a database project in VS2010. I created a SQL 2008 DB Project, pointed at the development server, and it seems to have generated all of the appropriate schema objects. However, all of the CREATE TRIGGER scripts have the following error:
SQL03120: Cannot find element referenced by the supporting statement
Googling that error message doesn't return much, and seems to point to scripts using ALTER instead of CREATE which isn't the case here. This is an example of one of the scripts:
CREATE TRIGGER [TR_t_TABLE_TRIGGERNAME] ON [content].[t_TABLE]
FOR INSERT
AS
BEGIN
IF ( SELECT COUNT(*) FROM inserted) > 0
BEGIN
DECLARE @columnBits VARBINARY(50)
SELECT @columnBits = COLUMNS_UPDATED() | CAST (0 AS BIGINT)
INSERT INTO [history].[t_TABLE]
(
....
)
SELECT
....
FROM inserted
END
END
GO
EXECUTE sp_settriggerorder @triggername = N'[Content].[TR_t_TABLE_TRIGGER]', @order = N'last', @stmttype = N'insert';
The line that Visual Studio is attributing the error to is the last line, executing the system proc. What stands out to me is that none of the objects exist in the dbo schema. The table is in the Content schema and it has a matching table in the History schema. It would seem like the [Content] and [History] qualifiers would be resolvable, though. Can't figure this one out...
Since this post comes up when you search for the above error on Google: another reason for this error is if you've got a stored procedure in a database project which specifies ALTER PROCEDURE rather than CREATE PROCEDURE.
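For example, a project script along these lines will raise SQL03120; switching it to CREATE PROCEDURE clears the error (the procedure name and body here are just placeholders):
-- Raises SQL03120 when included in the database project:
ALTER PROCEDURE [dbo].[GetSomething]
AS
BEGIN
    SELECT 1;
END
GO

-- The project expects the object to be declared with CREATE instead:
CREATE PROCEDURE [dbo].[GetSomething]
AS
BEGIN
    SELECT 1;
END
GO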
Does anyone know if it is possible to use the Visual Studio / SQL Server Management Studio debugger to inspect the contents of a Table Value Parameter passed to a stored procedure?
To give a trivial example:
CREATE TYPE [dbo].[ControllerId] AS TABLE(
[id] [nvarchar](max) NOT NULL
)
GO
CREATE PROCEDURE [dbo].[test]
@controllerData [dbo].[ControllerId] READONLY
AS
BEGIN
SELECT COUNT(*) FROM @controllerData;
END
DECLARE @SampleData as [dbo].[ControllerId];
INSERT INTO @SampleData ([id]) VALUES ('test'), ('test2');
exec [dbo].[test] @SampleData;
Using the above with a breakpoint on the exec statement, I am able to step into the stored procedure without any trouble. The debugger shows that the @controllerData local has a value of '(table)', but I have not found any tool that would allow me to actually view the rows that make up that table.
Since you get no joy from the debugger, here is my suggestion: add an input variable to determine whether it is in test mode or not. Then, if it is in test mode, run the select at the top of the sp to see what the data is.
CREATE TYPE [dbo].[ControllerId] AS TABLE(
[id] [nvarchar](max) NOT NULL
)
GO
CREATE PROCEDURE [dbo].[jjtest]
(@controllerData [dbo].[ControllerId] READONLY
, @test bit = null)
AS
IF @test = 1
BEGIN
SELECT * FROM @controllerData
END
BEGIN
SELECT COUNT(*) FROM @controllerData;
END
END
GO
DECLARE @SampleData as [dbo].[ControllerId];
INSERT INTO @SampleData ([id]) VALUES ('test'), ('test2');
EXEC [dbo].[jjtest] @SampleData, 1;
I had no success trying to do the same as you described, so I guess it isn't possible yet. Will wait for SSMS 2010.
It is not possible for table variables, but I built a procedure which will display the contents of a temp table from another database connection (which is not possible with normal queries).
Note that it uses DBCC PAGE & the default trace to access the data so only use it for debugging purposes.
You can use it by putting a breakpoint in your code, opening a second connection and calling:
exec sp_select 'tempdb..#mytable'
There is a solution, I think: you can create another stored procedure that fills the table-valued parameter and then calls your main procedure; you then start debugging from the test procedure you made.
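For example, using the types from the question, a wrapper along these lines (the name test_harness is just illustrative) fills the TVP and calls the main procedure, so you can set a breakpoint in the wrapper and step into [dbo].[test]:
CREATE PROCEDURE [dbo].[test_harness]
AS
BEGIN
    DECLARE @SampleData AS [dbo].[ControllerId];
    INSERT INTO @SampleData ([id]) VALUES ('test'), ('test2');

    -- break here and step into [dbo].[test]; @SampleData is visible in this scope
    EXEC [dbo].[test] @SampleData;
END
GO

EXEC [dbo].[test_harness];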