How to migrate Oracle views to Teradata

I am working on a migration project from Oracle to Teradata.
The tables have been migrated using DataStage jobs.
How do I migrate the Oracle views to Teradata?
Directly copying the scripts does not work because of the SQL differences between the two databases.
Please help.

The DECODE() Oracle function is available as part of the Oracle UDF Library in the Downloads section of the Teradata Developer Exchange. Otherwise, the way you are using the DECODE function in your example matches exactly how the ANSI COALESCE() function behaves:
COALESCE(t.satisfaction, 'Not Evaluated')
It should be noted that the data types of the COALESCE() arguments must be implicitly compatible or you will receive an error. Therefore, t.satisfaction would need to be at least CHAR(13) or VARCHAR(13) (13 being the length of 'Not Evaluated') for the COALESCE() to evaluate. If it is not, you can explicitly cast the operand(s).
COALESCE(CAST(t.satisfaction AS VARCHAR(13)), 'Not Evaluated')
If your use of DECODE() includes more evaluations than in your example, I would suggest implementing the UDF or replacing it with a standard CASE expression, as illustrated below. That said, with Teradata 14 (or 14.1) you will find that many of the Oracle functions missing from Teradata are made available as standard functions, to help ease the migration path from Oracle to Teradata.
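For illustration, here is a hypothetical multi-branch DECODE() (the 'Y'/'N' codes are made up, not from the question) rewritten as an ANSI CASE expression:

-- Oracle
SELECT DECODE(t.satisfaction, 'Y', 'Satisfied', 'N', 'Dissatisfied', 'Not Evaluated')
FROM t;

-- Teradata / ANSI equivalent
SELECT CASE t.satisfaction
         WHEN 'Y' THEN 'Satisfied'
         WHEN 'N' THEN 'Dissatisfied'
         ELSE 'Not Evaluated'
       END
FROM t;

One caveat: DECODE() treats two NULLs as equal, while a simple CASE comparison does not, so a DECODE branch that matches NULL needs an explicit IS NULL test in the CASE rewrite.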

Related

Difference between Oracle functions/procedures and PostgreSQL functions/stored procedures?

Can anyone please tell me the difference between Oracle functions/procedures and PostgreSQL functions/stored procedures?
That question is too broad, but I'll try to enumerate some of the major differences:
They are written in different languages. Oracle has PL/SQL and Java, while with PostgreSQL you can use almost any language you want.
PostgreSQL's PL/pgSQL is a clone of PL/SQL, but there are significant differences.
The syntax of the CREATE FUNCTION and CREATE PROCEDURE statements is quite different (see the sketch after this list):
PostgreSQL takes the function body as a string literal, Oracle doesn't.
The syntax for declaring function results differs quite a bit.
Both systems have set-returning functions (Oracle calls them “pipelined”), but the syntax is different.
Oracle has a huge body of libraries in its data dictionary that makes upgrades a pain, but is very useful for writing functions. PostgreSQL has little of that, you typically write Perl or Python functions to interact with the system.
Support for procedures has only recently been added to PostgreSQL (v11), so they are not feature-complete yet.
You cannot have transaction management in PostgreSQL functions, and you cannot have what Oracle calls an “autonomous transaction”.
You can work around some of these restrictions to some extent, but it is not the same.
Oracle functions are executed in the user context of the owner by default, while in PostgreSQL the default is to run them in the user context of the invoker.
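For example, a trivial add_one function (an illustrative name, not from the question) shows the difference in how the body is supplied:

-- Oracle: the PL/SQL body is part of the CREATE statement itself
CREATE OR REPLACE FUNCTION add_one(p_val IN NUMBER) RETURN NUMBER IS
BEGIN
  RETURN p_val + 1;
END;
/

-- PostgreSQL: the body is passed as a (dollar-quoted) string literal
CREATE OR REPLACE FUNCTION add_one(p_val numeric) RETURNS numeric
LANGUAGE plpgsql AS $$
BEGIN
  RETURN p_val + 1;
END;
$$;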

Is static SQL to be preferred over dynamic SQL in PostgreSQL stored procedures?

I am not sure whether, in stored procedures, PostgreSQL treats static SQL any differently from a query submitted as a quoted string.
When I create a stored procedure in PostgreSQL using static SQL, there seems to be no validation of the table names, table columns, or column types; I only get a listing of the problems, if any, when I run the procedure.
open ref_cursor_variable for
select usr_name from usres_master;
-- This is a typing mistake; the table name should be users_master. But the stored procedure is created, and the error is thrown only when I run the procedure.
When I run the procedure I (naturally) get some error like :
table usres_master - invalid table name
The above is a trivial version. The real procedures we use at work combine several tables and run to at least a few hundred lines. In a PostgreSQL stored procedure, is there then no advantage to using static SQL over dynamic SQL, i.e. something like open ref_cursor_variable for EXECUTE select_query_string_variable?
Static SQL should be preferred almost all of the time; dynamic SQL should be used only when it is necessary:
for performance reasons (dynamic SQL doesn't reuse execution plans, and a one-shot plan can sometimes be better, or even necessary), or
when it can reduce a lot of code.
In every other case use static SQL. The benefits:
readability
reuse of execution plans
it is safe against SQL injection by default
static checking is available
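As a rough sketch (the function and table names here are hypothetical), the same cursor can be opened either way in PL/pgSQL:

CREATE OR REPLACE FUNCTION get_user_names(use_dynamic boolean)
RETURNS refcursor
LANGUAGE plpgsql AS $$
DECLARE
  ref_cursor_variable refcursor := 'user_names_cur';
BEGIN
  IF use_dynamic THEN
    -- dynamic SQL: the query is just a string, parsed and planned on every
    -- call, invisible to static checking, and open to injection if user
    -- input is concatenated in
    OPEN ref_cursor_variable FOR EXECUTE 'SELECT usr_name FROM users_master';
  ELSE
    -- static SQL: parsed together with the function; the plan can be cached
    -- and reused, and tools like plpgsql_check can inspect it
    OPEN ref_cursor_variable FOR SELECT usr_name FROM users_master;
  END IF;
  RETURN ref_cursor_variable;
END;
$$;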
The source of a function is just a string to Postgres. The main reason for this is that Postgres (unlike other DBMSs) supports many languages for functions and procedures, even installable ones. As the Postgres core can't possibly know the syntax of all of them, it cannot validate the "inner" part of a function. To my knowledge the "language API" does not contain any "validate" method (in theory this would probably be possible, though).
If you want to statically validate your PL/pgSQL functions (and procedures since Postgres 11) you could use e.g. https://github.com/okbob/plpgsql_check/
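For example, assuming the extension is installed and the get_user_names function sketched above exists, a check is a single query:

CREATE EXTENSION plpgsql_check;
SELECT * FROM plpgsql_check_function('get_user_names(boolean)');
-- reports missing tables/columns inside the static SQL without executing it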

Writing an isnull() wrapper for nvl()

We are moving our database from Oracle to SQL Server. My queries make extensive use of Oracle's nvl function. In SQL Server, the function to use is isnull(). If possible, I'd like to start getting my queries ready by changing them to use isnull(), while still on Oracle. My idea is to create a wrapper function isnull() in my schema and change my queries to use that function instead. That way when we switch database platforms, my queries are already using the new function.
Is there a way I can create a wrapper function in Oracle called isnull() that accepts and returns any datatype? Or do I just have to have multiple isnull() declarations, overloaded for all the expected data types?
Another approach might be to use COALESCE instead of NVL, since the syntax for COALESCE is the same in both Oracle and SQL Server. Still, the goal (if it is your goal) of having identical SQL that works efficiently (or even works at all) in both Oracle and SQL Server may not be realistic.
The only way in PL/SQL to have multiple overloads of the same function is to create them in a package. You can create a package that includes a number of overloaded IsNull functions accepting and returning different data types, and use those in your queries. Of course, that means you will have to include the package name in your code. It's easy enough to remove the package name when you move to SQL Server, but it won't be an exact migration.
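A minimal sketch of such a package (the name util and the choice of three overloads are illustrative):

CREATE OR REPLACE PACKAGE util IS
  FUNCTION isnull(p_val IN VARCHAR2, p_default IN VARCHAR2) RETURN VARCHAR2;
  FUNCTION isnull(p_val IN NUMBER,   p_default IN NUMBER)   RETURN NUMBER;
  FUNCTION isnull(p_val IN DATE,     p_default IN DATE)     RETURN DATE;
END util;
/
CREATE OR REPLACE PACKAGE BODY util IS
  FUNCTION isnull(p_val IN VARCHAR2, p_default IN VARCHAR2) RETURN VARCHAR2 IS
  BEGIN RETURN NVL(p_val, p_default); END;
  FUNCTION isnull(p_val IN NUMBER, p_default IN NUMBER) RETURN NUMBER IS
  BEGIN RETURN NVL(p_val, p_default); END;
  FUNCTION isnull(p_val IN DATE, p_default IN DATE) RETURN DATE IS
  BEGIN RETURN NVL(p_val, p_default); END;
END util;
/

Queries would then read util.isnull(some_column, 'fallback'); dropping the util. prefix is the only change left for the SQL Server move.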

Sybase Features

Does anybody know if Common Table Expressions and user-defined functions (not Java-based) are supported on Sybase 12.5? I'm trying but can't seem to make them work. Thanks, guys.
Neither is supported by ASE 12.5.
You can use stored procedures instead of functions. I am not sure what you are referring to by "common table expressions".
Sorry, I have to disagree. Microsoft SQL Server was originally built on Sybase code, so while there may be no Common Table Expressions or user-defined functions, there are equivalent ways to do the same things.
For example, a CTE can be emulated either with nested queries or via temp tables, using a number sign (#) in front of the table name, as sketched below.
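A sketch of the temp-table approach (the table and column names are made up):

-- materialize the would-be CTE into a # temp table
SELECT usr_name, dept_id
INTO #active_users
FROM users_master
WHERE status = 'A'

-- then query it like any other table
SELECT d.dept_name, COUNT(*)
FROM #active_users u, departments d
WHERE d.dept_id = u.dept_id
GROUP BY d.dept_name

DROP TABLE #active_users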
For user-defined functions, create a stored procedure containing the SQL and call it via exec, for example "exec my_sql_code". This also allows nesting of stored procedures; see the sketch below.
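A sketch of the workaround (the procedure and parameter names are made up); an output parameter stands in for the function's return value:

CREATE PROCEDURE my_isnull
  @val    VARCHAR(255),
  @dflt   VARCHAR(255),
  @result VARCHAR(255) OUTPUT
AS
  IF @val IS NULL
    SELECT @result = @dflt
  ELSE
    SELECT @result = @val

-- usage
DECLARE @r VARCHAR(255)
EXEC my_isnull NULL, 'fallback', @r OUTPUT
SELECT @r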
Good SQL, good night.

Oracle sql types over dblink

I have two schemas, A and B (Oracle 9). In A there is a dblink to B. In B there is a package that I call from A. The procedures in the B package can return varying numbers of results, so I think returning a collection is the better way to handle this.
create type B.tr_rad as object (
name varchar2(64)
,code number
,vendor number
,val varchar2(255)
,num number
);
create type B.tt_rad as varray(256) of B.tr_rad;
But from schema A I cannot use the tt_rad type, because using SQL types over a dblink is not supported. DBMS_SQL does not support cursors. Creating types with the same OID is impossible.
I am thinking of using temporary tables. But firstly, that is not great (after the remote function returns, the calling side must select the collection from a remote table), and secondly I am afraid that working through temporary tables will slow things down.
Does anyone know of an alternative way for the two sides to interact?
I've had similar problems in the past, and I came to the conclusion that Oracle's db links are fundamentally "broken" for anything but simple SQL types (especially UDTs; CLOBs may have problems, and XMLType may as well). If you can get the OID solution working, then good luck to you.
The solution I resorted to was to use a Java Stored procedure, instead of the DB Link.
Characteristics of the Java Stored Procedure:
It can return a "rich set of types", just about all of the complex types (UDTs, tables/arrays/varrays); see the Oracle online documentation for details. Oracle does a much better job of marshalling complex (or rich) types from Java than over a DB link.
Stored Java can acquire the "default connection" (it runs in the same session as the SQL connection to the DB, so there are no authentication issues).
The stored Java calls the PL/SQL procedure on the remote DB, and the Java JDBC layer does the marshalling from the remote DB.
The stored Java then packages up the result and returns it to the SQL or PL/SQL layer.
It's a bit of work, but if you know a bit of Java, you should be able to "cut and paste" a solution together from the Oracle documentation and samples.
I hope this helps.
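On the PL/SQL side, the stored Java is published through a call spec. A hypothetical sketch (the class and method names are made up, and it assumes schema A keeps a local copy of the tt_rad collection type; the Java method would connect via JDBC, call the remote package through a CallableStatement, and build the result as an oracle.sql.ARRAY):

CREATE OR REPLACE FUNCTION get_rad_remote RETURN tt_rad
AS LANGUAGE JAVA
NAME 'RadBridge.getRad() return oracle.sql.ARRAY';
/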
See this existing discussion: referencing oracle user defined types over dblink
An alternative interaction is to have one database with schemas A and B instead of two databases with a database link.
My solution:
On the B side I create a temporary table shaped like the collection record. On the A side I have a DBMS_SQL wrapper that calls the procedure over the dblink; that procedure writes the result collection into the temporary table. After the remote procedure completes successfully, I select the results from the remote temporary table and transform them into the local collection type, as sketched below.
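A sketch of the handshake (all names are illustrative; it assumes A keeps local copies of the tr_rad/tt_rad types and that the dblink is called b_link):

-- on B: a global temporary table shaped like the collection record
CREATE GLOBAL TEMPORARY TABLE tmp_rad (
  name   VARCHAR2(64),
  code   NUMBER,
  vendor NUMBER,
  val    VARCHAR2(255),
  num    NUMBER
) ON COMMIT PRESERVE ROWS;

-- on A: after the DBMS_SQL wrapper has run the remote procedure,
-- pull the rows back and rebuild the local collection
DECLARE
  l_rad tt_rad;
BEGIN
  SELECT tr_rad(name, code, vendor, val, num)
  BULK COLLECT INTO l_rad
  FROM tmp_rad@b_link;
END;
/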
Limitations:
1. The type and table definitions must be kept permanently in sync between the two schemas.
2. The A-side procedure (which calls the remote procedure) cannot be used in a SQL query.
3. It is complex to use.
