Source code for a specific stored procedure or function - Oracle

I can use all_arguments and all_procedures to list the procedures and functions inside any given package, and with DBMS_METADATA I can extract the DDL for the whole package. Is there an easy way (other than lots of instring and substring calls) to obtain the source code separately for each block of code in a package?
Something like this:
Owner | Package Name | Object Name | Overload | Arguments | Source
Obviously using substring and instring will present issues with overloaded functions.
All_arguments has a subprogram_id field which, according to the very sparse documentation on it, appears to uniquely identify which procedure it relates to within the package, but there doesn't appear to be anything that uses it.
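For reference, this is the sort of listing I can already get (a sketch; the owner and package names are hypothetical):

select owner, object_name as package_name, procedure_name, subprogram_id, overload
from   all_procedures
where  owner = 'SCOTT'          -- hypothetical schema
and    object_name = 'MY_PKG'   -- hypothetical package
order  by subprogram_id;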
Cheers in advance

IIRC, PLSQL allows nested procedures and functions. In this case, you'll find that "instring" and "substring" may not be adequate to extract the source code, as you're facing recursion, and string functions typically only handle a smaller class of computations (typically regular expressions). This is a classic problem people run into when trying to parse languages with simple string manipulation. You can get around the limits of string functions by essentially hacking together a poor man's parser, but this can be a surprising amount of work if you want it to be dead right, because you have to handle at least the recursive grammar rules that matter for your extraction.
Another way to get reliable access to the elements of a PLSQL package is to use a language parser. The DMS Software Reengineering Toolkit has a full PLSQL parser.
You'd have to extract the package text to a file first and then apply the PLSQL parser to it; that produces an abstract syntax tree (AST) internally in the parser. Given the name of a function, it is rather easy to search the AST for a function with a matching name. You'd end up with more than one hit if you have overloaded functions; you might qualify the function by the hierarchy in which it is embedded, or by whatever argument information you have. Having identified a specific function in the AST, one can ask DMS to pretty-print that tree, and it will regenerate the text of that function, complete with comments.

Is there a difference between fun(n::Integer) and fun(n::T) where T<:Integer in performance/code generation?

In Julia, I most often see code written like fun(n::T) where T<:Integer, when the function works for all subtypes of Integer. But sometimes, I also see fun(n::Integer), which some guides claim is equivalent to the above, whereas others say it's less efficient because Julia doesn't specialize on the specific subtype unless the subtype T is explicitly referred to.
The latter form is obviously more convenient, and I'd like to be able to use that if possible, but are the two forms equivalent? If not, what are the practical differences between them?
Yes, Bogumił Kamiński is correct in his comment: f(n::T) where T<:Integer and f(n::Integer) will behave exactly the same, except that the former method will have the name T already defined in its body. Of course, in the latter case you can just explicitly assign T = typeof(n) and it'll be computed at compile time.
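A minimal sketch of that point (hypothetical function names f and g):

f(n::T) where T<:Integer = T   # T is already bound inside the body
g(n::Integer) = typeof(n)      # recover the concrete type explicitly

f(Int8(1)) === g(Int8(1)) === Int8   # both return the concrete type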
There are a few other cases where using a TypeVar like this is crucially important, though, and it's probably worth calling them out:
f(::Array{T}) where T<:Integer is indeed very different from f(::Array{Integer}). This is the common parametric invariance gotcha (docs and another SO question about it).
f(::Type) will generate just one specialization for all DataTypes. Because types are so important to Julia, the Type type itself is special and allows parameterization like Type{Integer} to allow you to specify just the Integer type. You can use f(::Type{T}) where T<:Integer to require Julia to specialize on the exact type of Type it gets as an argument, allowing Integer or any subtypes thereof.
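A minimal sketch of both gotchas (hypothetical function names g and h):

# Parametric invariance: a Vector{Int} is not a Vector{Integer}
g(::Array{Integer}) = "exactly Vector{Integer}"
g(::Array{T}) where T<:Integer = "any integer element type"

g(Integer[1, 2])   # "exactly Vector{Integer}"
g([1, 2])          # "any integer element type"

# Specializing on the type itself via Type{T}
h(::Type{T}) where T<:Integer = T

h(Int)     # Int64 on a 64-bit machine
h(UInt8)   # UInt8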
Both definitions are equivalent. Normally you will use the fun(n::Integer) form and apply fun(n::T) where T<:Integer only if you need to use the specific type T directly in your code. For example, consider the following definitions from Base (all following definitions are also from Base), where it has a natural use:
zero(::Type{T}) where {T<:Number} = convert(T,0)
or
(+)(x::T, y::T) where {T<:BitInteger} = add_int(x, y)
And even if you need type information, in many cases it is enough to use the typeof function. Again, an example definition is:
oftype(x, y) = convert(typeof(x), y)
Even if you are using a parametric type, you can often avoid a where clause (which is a bit verbose), as in:
median(r::AbstractRange{<:Real}) = mean(r)
because you do not care about the actual value of the parameter in the body of the function.
Now, if you are a Julia user like me, the question is how to convince yourself that this works as expected. There are the following methods (a short sketch follows the list):
you can check that one definition overwrites the other in the methods table (i.e. after evaluating both definitions only one method is present for this function);
you can check the code generated by both functions using @code_typed, @code_warntype, @code_llvm or @code_native and find out that it is the same;
finally, you can benchmark the code for performance using BenchmarkTools.
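For example (a sketch; the function name fun is from the question):

using BenchmarkTools

fun(n::Integer) = n + 1
fun(n::T) where T<:Integer = n + 1   # same signature: replaces the method above

length(methods(fun))   # 1 - only one method remains
@code_typed fun(1)     # identical lowered code for either definition
@btime fun(1)          # identical timings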
A nice plot explaining what Julia does with your code is at http://slides.com/valentinchuravy/julia-parallelism#/1/1 (I also recommend the whole presentation to any Julia user; it is excellent). You can see on it that after lowering the AST, Julia applies a type inference step to specialize the function call before the LLVM codegen step.
You can hint the Julia compiler to avoid specialization. This is done using the @nospecialize macro on Julia 0.7 (it is only a hint, though).
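A minimal sketch (hypothetical function name):

nospec(@nospecialize(x)) = typeof(x)   # hint: compile one method body for all x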

Create constants visible across packages, accessible directly

I would like to define my Error Codes in a package models.
error.go
package models
const (
    EOK = iota
    EFAILED
)
How can I use them in another package without referring to them as models.EOK? I would like to use EOK directly, since these codes would be common across all packages.
Is it the right way to do it? Any better alternatives?
To answer your core question:
You can use the dot import syntax to import the exported symbols from another package directly into your package's namespace (godoc):
import . "models"
This way you could directly refer to the EOK constant without prefixing it with models.
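For example (a sketch; it assumes the models package shown in the question is importable under that path):

package main

import . "models"

func main() {
    if code := EOK; code == 0 { // EOK, not models.EOK
        // handle success
    }
}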
However, I'd strongly advise against doing so, as it generates rather unreadable code; see below.
General/style advice
Don't use an unprefixed import path like models. This is considered bad style as it will easily clobber other import paths. Even for small projects that are used only internally, use something like myname/models; see goblog.
Regarding your question about error generation, there are functions for generating error values, e.g. errors.New (godoc) and fmt.Errorf (godoc).
For a general introduction on go and error handling see goblog
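For example, a sketch of both constructors:

package models

import (
    "errors"
    "fmt"
)

var ErrNotFound = errors.New("record not found")

func find(id int) error {
    return fmt.Errorf("find %d: %w", id, ErrNotFound) // wraps ErrNotFound
}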
W.r.t. the initial question, use a compact package name, for example err.
Choosing an approach to propagating errors and generating error messages depends on the scale and complexity of the application. The error style you show, using an int and then a function to decode it, is quite C-ish.
That style was partly caused by:
the lack of multiple value returns (unlike Go),
the need to use a simple type (to be easily propagated), and
which gets translated to text with a function (unlike Go's error interface), so that the local language strings can be changed.
For small apps with simple error strings, I put the package's error strings at the head of a package file and just return them, maybe using errors.New(...), or fmt.Errorf if the string needs to be completed using some data.
That 'int' style of error reporting doesn't offer something as flexible as Go's error interface. The error interface lets us build information-rich error structures, to return useful information, and not just an int value or string.
An implication is that different packages can yield different real-types which implement the error interface. We don't need to agree on a single error real-type across an entire set of packages. So error is an interface which can be easily propagated, like an int, yet the real-type of an error can be much richer than an int. Error generation (implementing error) can be as centralised or distributed as we need, unlike strerror()-style functions, which can be awkward to extend.
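For illustration, a minimal sketch of such an information-rich error real-type (hypothetical names):

package models

import "fmt"

// ParseError carries structured context, unlike a bare int code.
type ParseError struct {
    Line int
    Msg  string
}

// Error implements the error interface.
func (e *ParseError) Error() string {
    return fmt.Sprintf("line %d: %s", e.Line, e.Msg)
}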

What is the best way to manage a large quantity of constants

I am currently working on a very complex program that processes rows from an input table and has a huge number of possible outcomes for each record. Because of this I have a very large number of constants defined for the outcome messages. There is one success message for the record, but a multitude of possible warnings and errors.
My first thought was to define all of my constants for these messages at the package body level, but then I decided to move each constant to the procedure where it is used. I'm now second guessing that decision and thinking of moving everything back to package body level. What is the best way to define this many constants? Ease of maintainability is my ultimate goal for this program since it is so complex.
I think this is a matter of taste. In my application I put all error codes into an Error-Package. All main and commonly used constants I put into a separate package (without a package body).
Again, a matter of taste, but I tend to put a list of named constants at the package spec level rather than the package body so that they can be referenced by any portion of the application. If I ever want to change the error code that c_err_for_specific_reason_x uses, it becomes a single place to do so.
If I wanted to hide the codes and put them within the body, I would have a get_error_code(p_get_error_name varchar) function that did the translation based on your passing a valid constant name.
I've done both on different projects, but tend towards the list over the function most times. I tend to use the function if it is a table-driven source of the data.
It ... wait for it ... depends!
Since you currently define your constants in the package body, you don't need them to be publicly accessible outside the package. So defining them in a spec really doesn't buy you anything.
Here is the rule I follow: define constants within the smallest scope needed. So if a constant is used only within one procedure, define it in that procedure. If it is used within more than one procedure, define it in the body. If it is used elsewhere by code in other packages (or non-packaged SPs), but only in conjunction with a particular package, define it in the spec of that package. If it is used by other code for general use, put it in a separate spec of such general constants.
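A sketch of that rule (all names are hypothetical):

CREATE OR REPLACE PACKAGE record_msgs AS
  -- used by code outside the package: declare in the spec
  c_success CONSTANT VARCHAR2(100) := 'Record processed successfully.';
  PROCEDURE log_outcome(p_msg VARCHAR2);
END record_msgs;
/

CREATE OR REPLACE PACKAGE BODY record_msgs AS
  -- shared by subprograms of this package only: declare in the body
  c_warn_limit CONSTANT PLS_INTEGER := 10;

  PROCEDURE log_outcome(p_msg VARCHAR2) IS
    -- used by a single procedure: declare in that procedure
    c_prefix CONSTANT VARCHAR2(10) := 'OUTCOME: ';
  BEGIN
    DBMS_OUTPUT.put_line(c_prefix || p_msg);
  END log_outcome;
END record_msgs;
/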

what is the use of erlang compile options: "-compile({parse_transform, ms_transform})".?

As the title says, could anybody explain the use of parse_transform with ms_transform?
What is the difference between compiling with it and without it?
The -compile({parse_transform, ms_transform}). syntax invokes a parse transform.
A parse transform is a module which the compiler calls after the file or input has been parsed. The module is called with the full abstract syntax of the whole module and must return a new abstract syntax for a whole module. The parse transform is allowed to do whatever it wants as long as the result is legal Erlang syntax. It is like a super macro facility which works on the whole module, not just on single function calls. The resulting module is then compiled. You can have many parse transforms.
Parse transforms are typically used to do compile-time evaluation and code transformations. The ets:fun2ms call mentioned by @P_A is a typical example of this, as it takes a fun and at compile time transforms it into a match specification; see Matchspecs and ets:fun2ms. But parse transforms allow you to do much more, for example add and remove functions. An example of this is a parse transform which generates access functions for all the fields in a record.
It is a very powerful tool, but unfortunately easy to get wrong and so create a real mess. There are, however, some 3rd party support tools which can be very helpful.
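The shape of a parse transform itself is simple; a minimal no-op sketch (hypothetical module name):

-module(my_transform).
-export([parse_transform/2]).

%% The compiler calls this with the module's abstract forms;
%% returning them unchanged makes this a no-op transform.
parse_transform(Forms, _Options) ->
    Forms.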
The ms_transform module implements a parse_transform that translates fun syntax into match specifications. For example, ets:fun2ms uses it.
Also you can use
-include_lib("stdlib/include/ms_transform.hrl").
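A minimal sketch of the difference it makes (hypothetical module):

-module(ms_demo).
-include_lib("stdlib/include/ms_transform.hrl").  %% enables the transform
-export([adult_names/1]).

%% Without the parse transform, the ets:fun2ms/1 call below raises an
%% error at runtime; with it, the fun is rewritten at compile time into
%% a match specification like [{{'$1','$2'},[{'>=','$2',18}],['$1']}].
adult_names(Tab) ->
    MS = ets:fun2ms(fun({Name, Age}) when Age >= 18 -> Name end),
    ets:select(Tab, MS).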

How to go about adding a symbol table interface to boost::spirit::lex based lexer?

To implement support for typedef you'd need to look up the symbol table whenever the lexer identifies an identifier, and return a different token. This is easily done in a flex lexer. I am trying to use Boost Spirit to build the parser and looked through the examples, but none of them pass any context information between the lexer and the parser. What would be the simplest way to do this in the mini-C compiler tutorial example?
That's equally easy in Spirit.Lex. All you need is the ability to invoke code after matching a token, but before returning the token to the parser. That's what lexer semantic actions are for:
this->self += identifier[ lex::_tokenid = lookup(lex::_val) ];
where lex::_tokenid is a placeholder referring to the token id of the current token, lex::_val refers to the matched token value (at that point most probably an iterator_range<> pointing into the underlying input stream), and lookup is a lazy function (i.e. a function object, such as a phoenix::function) implementing the actual lookup logic.
I'll try to find some time to implement a small example to be added to Spirit demonstrating this technique.
To implement support for typedef you'd need to look up the symbol table whenever the lexer identifies an identifier and return a different token.
Isn't that putting the cart before the horse? The purpose of a lexer is to take text input and turn it into a stream of simple tokens. This makes the parser easier to specify and deal with, as it doesn't have to handle low-level things like "these are the possible representations of a float" and such.
The language-based mapping of an identifier token to a symbol (ie: typedef) is not something that a lexer should be doing. That's something that happens at the parsing stage, or perhaps even later as a post-process of an abstract syntax tree.
Or, to put it another way, there is a good reason why qi::symbols is a parser object and not a lexer one. It simply isn't the lexer's business to handle this sort of thing.
In any case, it seems to me that what you want to do is build a means to (in the parser) map an identifier token to an object that represents the type that has been typedef'd. A qi::symbols parser seems to be the way to do this kind of thing.
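A minimal sketch of qi::symbols used that way (a hypothetical, self-contained demo, not the mini-C tutorial itself):

#include <boost/spirit/include/qi.hpp>
#include <iostream>
#include <string>

namespace qi = boost::spirit::qi;

int main()
{
    // qi::symbols maps identifier text to an attribute at parse time;
    // here it stands in for a table of typedef'd names.
    qi::symbols<char, int> typedefs;
    typedefs.add("myint", 1)("mybool", 2);

    std::string input = "myint";
    auto first = input.begin();
    int id = 0;
    bool ok = qi::parse(first, input.end(), typedefs, id);
    std::cout << std::boolalpha << ok << ' ' << id << '\n';  // prints: true 1
}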
