How to launch an external procedure in OpenEdge Progress-4GL - include

While learning about OpenEdge Progress-4GL, I stumbled upon running external procedures, and I just read the following line of code describing how to do this:
RUN p-exprc2.p.
For a person with programming experience in C/C++, Java and Delphi, this makes absolutely no sense: in those languages there is a bunch of procedures (functions), present in external files, which need to be imported, something like:
filename "file_with_external_functions.<extension>"
===================================================
int f1 (...){
return ...;
}
int f2 (...){
return ...;
}
filename "general_file_using_the_mentioned_functions.<extension>"
=================================================================
#import file_with_external_functions.<extension>;
...
int calculate_f1_result = f1(...);
int calculate_f2_result = f2(...);
So, in other words: external procedures (functions) mean that you write a collection of procedures (functions), put them all in one file, and when needed you import that file and call the procedure (function) you need.
In Progress 4GL, it seems you are launching the entire file!
Although this makes no sense at all in C/C++, Java or Delphi, I believe this means that Progress procedure files (extension "*.p") should contain only one procedure, and the name of the file is then the name of that procedure.
Is that correct and in that case, what's the sense of the PERSISTENT keyword?
Thanks in advance
Dominique

There are a lot of options to the RUN statement: https://documentation.progress.com/output/ua/OpenEdge_latest/index.html#page/dvref%2Frun-statement.html%23
But, in the simple case, if you just:
RUN name.p.
You are invoking a procedure. It might be internal, "super", "persistent" or external. It could also be an OS DLL.
The interpreter will first search for an internal procedure with that name. Thus:
procedure test.p:
  message "yuck".
end procedure.
run test.p.
Will run the internal procedure "test.p". A "local" internal procedure is defined inside the same compilation unit as the RUN statement. (Naming an internal procedure with ".p" is an abomination, don't do it. I'm just showing it to clarify how RUN resolves names.)
If a local internal procedure is not found then the 4gl interpreter will look for a SESSION SUPER procedure with that name. These are instantiated by first running a procedure PERSISTENT and then adding its handle to the session's super-procedure list.
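For example, a minimal sketch (the file and procedure names are made up) of how a session super procedure comes into being and how RUN then finds it:

/* superlib.p -- hypothetical persistent procedure */
PROCEDURE log-msg:
    DEFINE INPUT PARAMETER pcText AS CHARACTER NO-UNDO.
    MESSAGE pcText VIEW-AS ALERT-BOX.
END PROCEDURE.

/* caller.p */
DEFINE VARIABLE hSuper AS HANDLE NO-UNDO.
RUN superlib.p PERSISTENT SET hSuper.   /* instantiate it; it stays in memory */
SESSION:ADD-SUPER-PROCEDURE(hSuper).    /* register it as a session super procedure */
RUN log-msg ("hello").                  /* no local log-msg, so the super's one is run */

That is also the point of the PERSISTENT keyword: the procedure's context stays alive after the RUN statement finishes, so its internal procedures remain callable later.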
If no matching internal procedure or SUPER procedure is found the 4gl will search the PROPATH looking for a matching procedure (it will first look for a compiled version ending with .r) and, if found, will RUN that.
There are more complex ways to run procedures using handles and the IN keyword. You can also pass parameters and "compile on the fly" arguments. The documentation above gets into all of that. My answer is just covering a simple RUN name.p.
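That said, a rough sketch (names invented) of the handle-and-IN form, just so you can see the shape of it:

/* lib.p -- hypothetical library procedure */
PROCEDURE add-numbers:
    DEFINE INPUT  PARAMETER piA   AS INTEGER NO-UNDO.
    DEFINE INPUT  PARAMETER piB   AS INTEGER NO-UNDO.
    DEFINE OUTPUT PARAMETER piSum AS INTEGER NO-UNDO.
    piSum = piA + piB.
END PROCEDURE.

/* caller.p */
DEFINE VARIABLE hLib   AS HANDLE  NO-UNDO.
DEFINE VARIABLE iTotal AS INTEGER NO-UNDO.
RUN lib.p PERSISTENT SET hLib.                              /* instantiate the library */
RUN add-numbers IN hLib (INPUT 1, INPUT 2, OUTPUT iTotal).  /* explicit handle, so no name search */
MESSAGE iTotal VIEW-AS ALERT-BOX.
DELETE PROCEDURE hLib.                                      /* clean up when done */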

Progress was originally implemented as a procedural language which did its thing by running programs. That's what you're seeing with the "run" statement.
If one were to implement this in OO, it'd look something like this:
NEW ProgramName(Constructor,Parameter,List).
Progress added support for OO development which does things in a way you seem more familiar with.
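As a rough sketch of the OO flavour (the class and method names are made up):

/* Greeter.cls */
CLASS Greeter:
    METHOD PUBLIC VOID Hello (pcName AS CHARACTER):
        MESSAGE "Hello, " + pcName VIEW-AS ALERT-BOX.
    END METHOD.
END CLASS.

/* caller.p */
DEFINE VARIABLE oGreeter AS CLASS Greeter NO-UNDO.
oGreeter = NEW Greeter().
oGreeter:Hello("world").
DELETE OBJECT oGreeter.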

Related

How to test my dll file written in fortran?

I have written a Fortran code for being compiled as a '*.DLL' file.
The program which reads that file is a Finite Element Method software named Plaxis. I have already managed to generate the '*.DLL' file in Visual Studio, and Plaxis recognizes my model, but the model does not work correctly.
I would like to inspect all the variables involved in my code and the procedure that Plaxis uses to read them, but when I use statements like "write(*,*) 'variable'" Plaxis does not show me what I asked for in the source code.
Probably you want to open a file and write to that for debug logging, because presumably Plaxis doesn't run with standard output connected to anything useful. Or maybe it would if you just ran Plaxis from a command line window?
It's not going to create a dialog box for you.
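A rough sketch of such a logging helper (the log-file name is arbitrary; any Fortran 2008 compiler should accept newunit=):

subroutine debug_log(msg)
  character(len=*), intent(in) :: msg
  integer :: u
  ! append the message to a plain text file, since stdout goes nowhere useful under Plaxis
  open(newunit=u, file='plaxis_debug.txt', position='append', status='unknown')
  write(u, '(a)') msg
  close(u)
end subroutine debug_log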
But anyway, another option might be to attach to Plaxis with a debugger and set a breakpoint in a function in your DLL. Then you can single-step your code as called by Plaxis.
Or you can write your own test callers and write unit tests for your functions, making them easy to debug. This could work well if your function just gets an array + size as args.
If instead it passes some wrapped object that you need to call special functions to deal with, then maybe make another version of your function that does just take an array so you can call it from a simple test caller.
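Such a test caller can be as small as this (usr_mod and its array-plus-size signature are only placeholders for whatever your DLL actually exports; build the driver against the same source files):

program test_caller
  implicit none
  integer, parameter :: n = 5
  real :: x(n)
  x = [1.0, 2.0, 3.0, 4.0, 5.0]
  call usr_mod(x, n)          ! call the routine directly, outside Plaxis
  print *, 'result: ', x
end program test_caller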

IDL procedure won't compile

I've made a very simple IDL procedure which I'd like to use in my other code. I've added its directory to my path and I can't see a name conflict anywhere.
When I try to run the procedure from another one, it is not found. I set up my path with:
!PATH=!PATH+':'+Expand_Path('+~/example/')
But when I try to find the procedure using "Findpro" I get
Procedure CHLOADCT found in directory /data/clh93/colortables/CH/
So my paths are right. I don't understand why it won't find my procedure, does anyone know what's going on?
Thanks!
Christina
Things to check in situations like this:
you are consistent about whether the routine is a function or a procedure, i.e., the routine is a procedure and you are calling it as a procedure
the routine you are calling is either already compiled or is the last routine in a file with the same name and a .pro extension; the case of the filename and the routine call must match, or the filename must be all lowercase (just use all-lowercase filenames!)
quick check to see if my_routine.pro is in your !path:
IDL> print, file_which('my_routine.pro')
It will return an empty string if it can't find it.
make sure the file is not in your !path twice, in which case you might be getting someone else's copy or an old version

Sequences in Maple [Unable to Execute]

I've been working on a programme to solve any maximisation LPP using the Revised Simplex Method. I have a problem with it though as I'm trying to input a sequence to solve the problem of non-basic variables.
My code is as follows:
matmax:=proc(tableau,basic)
local pivot,T,nbv,n,m,b;
T:=evalm(tableau);
n:=coldim(T); m:=rowdim(T);
b:=evalm(basic);
print(evalm(T));
nbv:={seq(i,i=2..n-1)}minus{seq(b[i],i=1..m)};
pivot:=getpiv(T,nbv);
while not pivot=FAIL do
b[pivot[1]]:=pivot[2];
T:=evalm(gauss(col(T,pivot[2]),pivot[1])&*T);
print(evalm(T));
nbv:={seq(i,i=2,..n-1)}minus{seq(b[i],i=1..m)};
pivot:=getpiv(T,nbv);
od;
[evalm(T),evalm(b)];
end;
The gauss and getpiv commands are procedures written to work in this programme, these work fine. However upon executing this procedure Maple returns with the message "Error, (in matmax) unable to execute seq" If anyone can give me any help on how to fix this problem it would be greatly appreciated.
If you have not loaded the linalg package before calling your matmax then commands like coldim will simply not work and will not produce the integer results for n and m that are expected when those are used in the bounds of the seq calls. I believe that is why your seq error occurs: n and m are not being assigned the integer intermediate results you expect.
You could try to remedy this by loading the package before calling matmax, with with(linalg). But that is not so robust, and there are scenarios where it might not work. The with command won't work within a procedure body, so you can't put it inside the definition of proc matmax.
You could insert a line into matmax above, say, the local declaration line, like,
uses linalg;
That would make coldim and friends from the linalg package work. Unfortunately you've used the name pivot as a local variable, and that clashes with the pivot export from the linalg package, so that simple fix via uses is not quite enough on its own. You could, of course, rename the local pivot to something else and keep that simple uses line.
My own preference would be to make everything fully explicit, so that later on you or anyone else reading the code can understand it more clearly, even if it's longer. So I would use linalg[coldim] instead of coldim, and so on for the other linalg exports used within matmax.
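To illustrate the fully qualified approach, the start of matmax could look like this (just a sketch; the rest of your procedure stays the same):

matmax := proc(tableau, basic)
  local pivot, T, nbv, n, m, b;
  T := evalm(tableau);
  n := linalg[coldim](T);   # fully qualified, so it works without with(linalg)
  m := linalg[rowdim](T);
  # ... the remainder of the procedure as before ...
end proc;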
Having said all the above, you should know that the linalg package is deprecated in modern Maple and that LinearAlgebra is the new package that offers the functionality you seem to be using. The command names are longer, but using the newer package means that you don't need all those evalm calls (or anything like it).
The problem could also lie in your gauss and getpiv commands, as they may not work with your procedure; could you expand on what they do?

How can I read functions and procedures body/ddl while they are in a package?

TASK:
Move all the functions and procedures in packages to the current Oracle schema. (You can imagine a case where you could need that; if not, take it as a challenge!)
QUESTION:
How can I read the function/procedure "body" while they are in the package? I know that I can use all_source, dba_source and others to get the package body lines, but this means that I have to parse all those rows/strings - there should be an easier way, shouldn't there?
If you have access to Toad, it does this very well.
Also, look at the DBMS_METADATA package, specifically the GET_DDL function.
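For example (the package and schema names here are placeholders), something along these lines returns the whole body as a CLOB you can then work with:

-- object type, object name, owner
SELECT DBMS_METADATA.GET_DDL('PACKAGE_BODY', 'MY_PACKAGE', 'MY_SCHEMA')
  FROM dual;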
Hope that helps.
Why exactly do you need this?
Are you just trying to execute the functions and procedures as if they were defined in your schema? If so, then invoker's rights may help.
Are you doing this for testing? If so, take a look at this answer: Is there a way to access private plsql procedures for testing purposes? (summary: use conditional compilation to optionally make functions and procedures public)
If you really need to break the packages down to functions and procedures you'll need to do it manually if you want to be 100% accurate.
There are many potential problems with just reading the source and trying to do it automatically. What about package variables, types, initialization, security (can every function be public?), procedures within procedures, duplicate names, wrapped source, etc.

Software engineering with Ada: stubs; separate and compilation units [closed]

I have a mechanical engineering background, but I'm interested in learning good software engineering practice with Ada. I have a few queries.
Q1. If I understand correctly then someone can just write a package specification (ads) file, compile it and then compile the main program which uses the package. Later on, when one knows what to include in the package body, the latter can be written and compiled. Afterwards, the main program can be run. I've tried this and I would like to confirm that this is good practice.
Q2. My second question is about stubs (sub-units) and the use of SEPARATE. Say I have a main program as follows:
WITH Ada.Float_Text_IO;
WITH Ada.Text_IO;
WITH Ada.Integer_Text_IO;
PROCEDURE TEST2 IS
A,B : FLOAT;
N : INTEGER;
PROCEDURE INPUT(A,B: OUT FLOAT; N: OUT INTEGER) IS SEPARATE;
BEGIN -- main program
INPUT(A,B,N);
Ada.Float_Text_IO.Put(Item => A);
Ada.Text_IO.New_line;
Ada.Integer_Text_IO.Put(Item => N);
END TEST2;
Then I have the procedure INPUT in a separate file:
separate(TEST2)
PROCEDURE INPUT(A,B: OUT FLOAT; N: OUT INTEGER) IS
BEGIN
Ada.Float_Text_IO.Get(Item => A);
Ada.Text_IO.New_line;
Ada.Float_Text_IO.Get(Item => B);
Ada.Text_IO.New_line;
Ada.Integer_Text_IO.Get(Item => N);
END INPUT;
My questions:
a) AdaGIDE suggests that I save the INPUT procedure file as input.adb. But then, on compiling the main program test2, I get the warning:
warning: subunit "TEST2.INPUT" in file "test2-input.adb" not found
cannot generate code for file test2.adb (missing subunits)
To AdaGIDE, this is more of an error as the above warnings come before the message:
Compiling...
Done--error detected
So I renamed the input.adb file to test2-input.adb, as was suggested to me by AdaGIDE on compiling. Now, on compiling the main file, I don't get any warnings. My question now is whether it's OK to write
PROCEDURE INPUT(A,B: OUT FLOAT; N: OUT INTEGER) IS
as I did in the sub-unit file test2-input.adb or is it better to write a more descriptive term like
PROCEDURE TEST2-INPUT(A,B: OUT FLOAT; N: OUT INTEGER) IS
to emphasize that procedure INPUT has a parent procedure TEST2? This thought follows from AdaGIDE hinting at test2-input.adb, as I mentioned above.
b) My next question:
If I understand the compilation order correctly, then I should compile the main file test2.adb first and then the stub test2-input.adb. On compiling the stub I get the error message:
cannot generate code for file test2-input.adb (subunit)
Done--error detected
However, I can now do the binding and linking for test2.adb and run the program.
I would like to know if I did wrong by trying to compile the stub test2-input.adb or should it not be compiled?
Q3. What is the use of having subunits? Is it just to break a large program into smaller parts? I know an error arises if one doesn't put any statements between BEGIN and END in the subunit. So this means that one always has to put a statement there. And if one wants to write the statements later, one can always put a NULL statement between BEGIN and END in the subunit and come back to it at a later time. Is this how software engineering is done in practice?
Thanks a lot...
Q1: That is excellent practice.
And by treating the package specification as a specification, you can provide it to other developers so that they will know how to interface to your code.
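For example, a specification as small as this (the names are invented) compiles on its own and can be WITHed by the main program before any body exists:

-- stacks.ads : interface only; the body can be written and compiled later
package Stacks is
   procedure Push (Item : in Integer);
   function Pop return Integer;
end Stacks;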
Q2: I believe that AdaGIDE actually uses the GNAT compiler for all compilation, so it's actually GNAT that is in charge of the acceptable filenames. (This can be configured, but unless you have a very compelling reason to do so, it is far simpler to simply go with GNAT/AdaGIDE's file naming conventions.) More pertinent to your question, though, there's no strong reason to include the parent unit as part of the separate unit's name. But see the answer to Q3...
Q3: Subunits were introduced with the first version of Ada (Ada 83), in part to help modularize code and allow for deferred development and compilation. However, Ada software development practice has pretty much abandoned the use of subunits; the procedure/function/task/etc. bodies are simply all maintained in the body of the package. They are still used in some areas, such as when a platform-specific version of a subprogram may be needed, but for the most part they're rarely used. Keeping everything in the package body leaves fewer files to keep track of, and keeps the implementation code of a package all together. So I strongly recommend you simply ignore the subunit capabilities and place all your implementation code in package bodies.
It's pretty normal to split a problem up into component parts (packages), each supporting a different aspect. If you've learnt Ada, it'd be normal to write the specs of the packages first, argue (perhaps with yourself) why that's the right design, and then implement them. And this would be normal, I think, in any language that supports specs and bodies - for example, C.
Personally I would do check compilations as I went, just to make sure I'm not doing anything stupid.
As for separates - one (not very good) reason is to reduce clutter, to stop the unit getting too long. Another reason (for a code generator I wrote) was so that the code generator didn't need to worry about preserving developers' hand-written code in the UML model; all code bodies were separates. A third might be for environment-dependent implementation (eg, Windows vs Unix), where you'd let the compiler see a different version of the separate body for each environment (people normally use library packages for this, though).
Compilers have their own rules about file names, and what order things can be compiled in. When GNAT sees
procedure Foo is
procedure Bar is separate;
it expects to find Foo's body in a file named foo.adb and Bar's body in foo-bar.adb (you can, I believe, tell it otherwise via gnatmake's package Naming, but it's probably not worth the trouble). It's best to go with the flow here;
separate (Foo)
procedure Bar is
is clear enough.
You can compile foo-bar.adb, and that will do a full analysis and catch almost all errors in the code; but GNAT can't generate code for this on its own. Instead, when you compile foo.adb it includes all the separate bodies in the one generated object file. It certainly isn't wrong to do this.
With GNAT, there's no need to worry about compilation order, you can compile in any order you like. But it's best to use gnatmake and let the computer take the strain!
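For example, from the source directory, a single command:

gnatmake test2.adb

compiles test2.adb (pulling in test2-input.adb), then binds and links the executable.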
You can indeed work the way you describe, except of course your program won't link until all of the package bodies have some kind of implementation. For that reason, I think it is more normal to write a dummy package body with all procedures implemented as:
begin
null;
end;
And all functions implemented as something like:
begin
return The_Return_Type'first; --'
end;
As for separates...I don't like them. For me I'd much rather be able to follow the rule that all the code for a package is in its package body. Separates are marginally acceptable if for some reason the routine is huge, but in that case a better solution is almost always to refactor your code. So any time I see one, it is a big red flag.
As for the file name thing, this is a GNAT issue, not an Ada issue. GNAT took the unusual position for a compiler that the name of the contents of a file dictates what the file itself must be named. There are probably other compilers in the world that do that, but I've yet to find one in 30 years of coding.
