I am new to Exasol and haven't found out how to declare a variable the way one can in SQL Server. In T-SQL I would write:
DECLARE @variable_name datatype [ = initial_value ]
And in Exasol? Many thanks in advance for the help.
In Exasol, the equivalent of a stored procedure is called a SCRIPT. Script variables are dynamically typed.
You can use script variables in the following way:
local msg='Hello'
https://docs.exasol.com/content/database_concepts/scripting/general_script_language.htm
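For example, a minimal sketch of a script using a local variable (hello_demo is a made-up name; scripts use Lua by default):

CREATE OR REPLACE LUA SCRIPT hello_demo () RETURNS ROWCOUNT AS
    local msg = 'Hello'     -- script variable, typed dynamically
    output(msg)             -- printed when the script is run WITH OUTPUT
/

EXECUTE SCRIPT hello_demo() WITH OUTPUT;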
In ExaPlus (user interface) you could use "define".
Example
define msg='Hello';
https://optimumretrieval.wordpress.com/2016/12/15/using-variables-in-exaplus/
There is such a feature in the ExaPlus client. It's described in the User Manual.
But you have to keep a few things in mind:
ExaPlus GUI is no longer available starting from Exasol 6.1;
It is a purely client-side feature. There are no variables in other clients or in the DBMS itself;
But you may still build your query normally, with placeholders bound to variables, using any programming language - see the sketch below.
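For instance, a minimal client-side sketch in Python (the DSN name "exasol" and the table t are placeholders; any driver with parameter binding works the same way):

import pyodbc

conn = pyodbc.connect("DSN=exasol")    # connection details are up to you
msg = "Hello"                          # the "variable" lives in the client program
cur = conn.cursor()
cur.execute("SELECT * FROM t WHERE col = ?", msg)  # ? is bound at run time
print(cur.fetchall())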
While learning about OpenEdge Progress-4GL, I stumbled upon running external procedures, and I just read the following line of code, describing how to do this:
RUN p-exprc2.p.
For a person with programming experience in C/C++, Java and Delphi, this makes absolutely no sense: in those languages, procedures (functions) live in external files, which need to be imported, something like:
filename "file_with_external_functions.<extension>"
===================================================
int f1 (...){
return ...;
}
int f2 (...){
return ...;
}
filename "general_file_using_the_mentioned_functions.<extension>"
=================================================================
#import file_with_external_functions.<extension>;
...
int calculate_f1_result = f1(...);
int calculate_f2_result = f2(...);
So, in other words: external procedures (functions) mean that you collect a number of procedures (functions) in one file, import that file where needed, and call the individual procedure (function) when you need it.
In Progress 4GL, it seems you are launching the entire file!
Although this makes no sense at all in C/C++, Java or Delphi, I believe this means that Progress procedure files (extension "*.p") should only contain one procedure, and the name of the file is then the name of that procedure.
Is that correct and in that case, what's the sense of the PERSISTENT keyword?
Thanks in advance
Dominique
There are a lot of options to the RUN statement: https://documentation.progress.com/output/ua/OpenEdge_latest/index.html#page/dvref%2Frun-statement.html%23
But, in the simple case, if you just:
RUN name.p.
You are invoking a procedure. It might be internal, "super", "persistent" or external. It could also be an OS DLL.
The interpreter will first search for an internal procedure with that name. Thus:
procedure test.p:
    message "yuck".
end.

run test.p.
Will run the internal procedure "test.p". A "local" internal procedure is defined inside the same compilation unit as the RUN statement. (Naming an internal procedure with ".p" is an abomination, don't do it. I'm just showing it to clarify how RUN resolves names.)
If a local internal procedure is not found then the 4gl interpreter will look for a SESSION SUPER procedure with that name. These are instantiated by first running a PERSISTENT procedure.
If no matching internal procedure or SUPER procedure is found the 4gl will search the PROPATH looking for a matching procedure (it will first look for a compiled version ending with .r) and, if found, will RUN that.
There are more complex ways to run procedures using handles and the IN keyword. You can also pass parameters and "compile on the fly" arguments. The documentation above gets into all of that. My answer is just covering a simple RUN name.p.
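To make PERSISTENT and SUPER concrete, a rough sketch (lib.p and doSomething are made-up names):

DEFINE VARIABLE hLib AS HANDLE NO-UNDO.

RUN lib.p PERSISTENT SET hLib.      /* instantiate: internal procedures stay in memory */
RUN doSomething IN hLib.            /* call one of lib.p's internal procedures via the handle */

SESSION:ADD-SUPER-PROCEDURE(hLib).  /* or register it as a SESSION SUPER procedure */
RUN doSomething.                    /* now found by the super-procedure search */

DELETE PROCEDURE hLib.              /* clean up when done */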
Progress was originally implemented as a procedural language which did its thing by running programs. That's what you're seeing with the "run" statement.
If one were to implement this in OO, it'd look something like this:
NEW ProgramName(Constructor,Parameter,List).
Progress added support for OO development which does things in a way you seem more familiar with.
I am trying to use the BackendListener and observed that runtime variables are not written to InfluxDB.
Predefined variables and properties, on the other hand, can be written to InfluxDB.
So I can separate test results by some ID by setting measurement=${__P(SOME_ID)}
What I'm looking for is splitting results by thread group name, as I may have up to several dozen of them within the same test.
I tried the following:
TAG_scenarioName=${__threadGroupName}
TAG_someJmeterVar=${SOME_JMETER_VAR}
TAG_someJmeterVarAsGroovy=${__groovy(vars.get("SOME_JMETER_VAR"),)}
eventTags=${__threadGroupName} testTitle=${__threadGroupName} (this one makes less sense, but still..)
and none of those work.
These do work:
- TAG_injectorName=${__machineName()}
- TAG_predefinedVar=${USER_DEFINED_VAR} (I believe this is thanks to this)
So as I understand problem is with runtime variables only. Is it possible to make runtime variables accessible for the BackendListener? Or maybe there is some workaround for such case?
P.S. I opened an enhancement request for this.
Based on your inputs, please try "sample variables".
These are defined in user.properties like:
sample_variables=FileName,retHREF,PageID,Redirect,StatusCode
So, put all your variables in sample_variables, restart JMeter and try. Please check if this helps.
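If this route works in your JMeter version, you still need the thread group name in a regular variable first. A JSR223 PreProcessor sketch (scenarioName is a made-up name, untested):

vars.put('scenarioName', ctx.getThreadGroup().getName())

and then in user.properties:

sample_variables=scenarioName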
This is not possible as of JMeter 5.1.1 (last version at time of this answer).
It's a feature request which is not implemented yet:
https://bz.apache.org/bugzilla/show_bug.cgi?id=57962
Can someone provide a better explanation of the xdmp:eval() and xdmp:value() functions?
I have tried to follow the Developer API, but I am not really satisfied with the examples there, and it's a bit vague for me. I would really appreciate it if someone could help me understand those functions and their differences, with examples.
Both functions are for executing strings of code dynamically, but xdmp:value is evaluated against the current context, such that if you have variables defined in the current scope or modules declared, you can reference them without redeclaring them.
xdmp:eval necessitates the creation of an entirely new context that has no knowledge of the context calling xdmp:eval. One must define a new XQuery prolog, and variables from the main context are passed to the xdmp:eval call as parameters and declared as external variables in the eval script.
Generally, if you can use xdmp:value, it's probably the best choice; however, xdmp:eval has some capabilities that xdmp:value doesn't, namely everything defined in the <options> argument. Through these options, it's possible to control the user executing the query, the database it's executed against, transaction mode, etc.
There is another function for executing dynamic strings: xdmp:unpath, and it's similar to xdmp:value, but more limited in that it can only execute XPath.
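A rough sketch of the difference (the variable name and strings are made up):

let $greeting := "Hello"
(: xdmp:value sees the calling context, so $greeting is visible :)
let $v := xdmp:value("fn:concat($greeting, ' via value')")
(: xdmp:eval runs in a fresh context: variables must be passed explicitly :)
let $e := xdmp:eval('
    declare variable $greeting external;
    fn:concat($greeting, " via eval")',
    (xs:QName("greeting"), "Hello"))
return ($v, $e)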
This should be easy to answer, but I couldn't find exactly what I was asking on google/stackoverflow.
I have a bash script with 18 functions (785 lines) - ridiculous, I know; I need to learn another language for the lengthy stuff. I have to run these functions in a particular order, because functions later in the sequence use info from the database and/or text files that were modified by the functions preceding them. I am pretty much done with the core functionality of all the functions individually, and I would like a function to run them all (One ring to rule them all!).
So my questions are, if I have a function like so:
function precious()
{
    rings_of    # Functions in sequence
    elves       # This function modifies DB
    men         # This function uses DB to modify text
    dwarves     # This function uses that modified text
}
Would variables be carried from one function to the next if declared like so? (inside of a function):
function men()
{
    ...
    frodo_sw_name=`some DB query returning the name of Frodo's sword`
    ...
}
Also, if the functions are called in a specific order, as seen above, will Bash wait for one function to finish before starting the next? - I am pretty sure the answer is yes, but I have a lot of typing to do either way, and since I couldn't find this answer quickly on the internet, I figured it might benefit others to have this answer posted as well.
Thanks!
Variables persist unless you run the function in a subshell. This would happen if you run it as part of a pipeline, or group it with (...) (you should use { ... } instead for grouping if you don't want to create a subshell).
The exception is if you explicitly declare the variables in the function with declare, typeset, or local, which makes them local to that function rather than global to the script. But you can also use the -g option to declare and typeset to declare global variables (this would obviously be inappropriate for the local declaration).
See this tutorial on variable scope in bash.
Commands are all run sequentially, unless you deliberately background them with & at the end. There's no difference between functions and other commands in this regard.
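To make the scoping rules concrete, a quick sketch (all names are made up):

#!/bin/bash

men() {
    frodo_sw_name="Sting"       # plain assignment: global, survives the call
    local bearer="Frodo"        # 'local': visible only inside men()
}

elves() { ring_count=3; }

men
echo "$frodo_sw_name"           # -> Sting
echo "${bearer:-unset}"         # -> unset

( elves )                       # (...) forces a subshell
echo "${ring_count:-unset}"     # -> unset: the assignment was lost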
TASK:
Move all the functions and procedures in packages to the current Oracle schema. (You can imagine a case where you could need that; if not, take it as a challenge!)
QUESTION:
How can I read a function/procedure "body" while it is in the package? I know that I can use all_source, dba_source and others to get the package body lines, but this means that I have to parse all those rows/strings - there should be an easier way, shouldn't there?
If you have access to Toad, it does this very well.
Also, look at the DBMS_METADATA package, specifically the GET_DDL function.
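For example, in SQL*Plus (MY_PKG and MY_SCHEMA are placeholders):

SET LONG 100000     -- so the returned CLOB isn't truncated
SELECT DBMS_METADATA.GET_DDL('PACKAGE_BODY', 'MY_PKG', 'MY_SCHEMA') FROM dual;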
Hope that helps.
Why exactly do you need this?
Are you just trying to execute the functions and procedures as if they were defined in your schema? If so, then invoker's rights may help.
Are you doing this for testing? If so, take a look at this answer: Is there a way to access private plsql procedures for testing purposes? (summary: use conditional compilation to optionally make functions and procedures public)
If you really need to break the packages down to functions and procedures you'll need to do it manually if you want to be 100% accurate.
There are many potential problems with just reading the source and trying to do it automatically. What about package variables, types, initialization, security (can every function be public?), procedures within procedures, duplicate names, wrapped source, etc.