-- I'm sure this is a duplicate --
I read this in an O'Reilly book (: but there was no reasoning given ):
In a simple AppleScript file:
script implicitRunHandlerScript
end script
run implicitRunHandlerScript
-- why does this lead to a stack overflow?
The script you posted contains a child script object, named implicitRunHandlerScript, and a handler, the "implicit run handler", which is made up of the statements at the top level of the file. Here the implicit run handler contains one statement:
run implicitRunHandlerScript
A child script inherits the handlers of its parent, so your implicitRunHandlerScript inherits the implicit run handler of its parent. That inherited implicit run handler in turn runs implicitRunHandlerScript, so it calls itself recursively until the stack overflows.
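One way to break the cycle, as a sketch, is to give the child script an explicit run handler of its own, so nothing is inherited from the parent:

```applescript
script implicitRunHandlerScript
	on run
		-- explicit run handler: shadows the inherited one, no recursion
		display dialog "child script ran"
	end run
end script
run implicitRunHandlerScript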
Read Defining Script Objects and Inheritance in Script Objects in the AppleScript Language Guide.
From AppleScript: The Definitive Guide:
If a script object has no explicit run handler and has no executable
statements in its implicit run handler, telling it to run can have
unpredictable consequences (this fact is almost certainly a bug).
While learning about OpenEdge Progress-4GL, I stumbled upon running external procedures, and I just read the following line of code, describing how to do this:
RUN p-exprc2.p.
For a person with programming experience in C/C++, Java, and Delphi, this makes absolutely no sense: in those languages there are procedures (functions), present in external files, which need to be imported, something like:
filename "file_with_external_functions.<extension>"
===================================================
int f1 (...){
return ...;
}
int f2 (...){
return ...;
}
filename "general_file_using_the_mentioned_functions.<extension>"
=================================================================
#import file_with_external_functions.<extension>;
...
int calculate_f1_result = f1(...);
int calculate_f2_result = f2(...);
So, in other words: external procedures (functions) mean that you make a file containing a list of procedures (functions), and when needed you import that file and call the procedure (function) you need.
In Progress 4GL, it seems you are launching the entire file!
Although this makes no sense at all in C/C++, Java, or Delphi, I believe this means that Progress procedure files (extension "*.p") should only contain one procedure, and the name of the file is then the name of that procedure.
Is that correct and in that case, what's the sense of the PERSISTENT keyword?
Thanks in advance
Dominique
There are a lot of options to the RUN statement: https://documentation.progress.com/output/ua/OpenEdge_latest/index.html#page/dvref%2Frun-statement.html%23
But, in the simple case, if you just:
RUN name.p.
You are invoking a procedure. It might be internal, "super", "persistent" or external. It could also be an OS DLL.
The interpreter will first search for an internal procedure with that name. Thus:
procedure test.p:
    message "yuck".
end.
run test.p.
Will run the internal procedure "test.p". A "local" internal procedure is defined inside the same compilation unit as the RUN statement. (Naming an internal procedure with ".p" is an abomination, don't do it. I'm just showing it to clarify how RUN resolves names.)
If a local internal procedure is not found then the 4gl interpreter will look for a SESSION SUPER procedure with that name. These are instantiated by first running a PERSISTENT procedure.
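As a sketch (the procedure and handle names here are invented for illustration), a session super procedure is set up roughly like this:

```progress
/* run a procedure persistently and register it as a session super procedure */
DEFINE VARIABLE hLib AS HANDLE NO-UNDO.
RUN lib.p PERSISTENT SET hLib.
SESSION:ADD-SUPER-PROCEDURE(hLib).

/* later, an unqualified RUN of one of lib.p's internal procedures
   can be resolved via the session super procedure */
RUN doStuff.
```

This is also the point of the PERSISTENT keyword: the procedure instance stays alive after its main block returns, so its internal procedures remain callable through its handle.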
If no matching internal procedure or SUPER procedure is found the 4gl will search the PROPATH looking for a matching procedure (it will first look for a compiled version ending with .r) and, if found, will RUN that.
There are more complex ways to run procedures using handles and the IN keyword. You can also pass parameters and "compile on the fly" arguments. The documentation above gets into all of that. My answer is just covering a simple RUN name.p.
Progress was originally implemented as a procedural language which did its thing by running programs. That's what you're seeing with the "run" statement.
If one were to implement this in OO, it'd look something like this:
NEW ProgramName(Constructor,Parameter,List).
Progress added support for OO development which does things in a way you seem more familiar with.
alias cmd_name="source mainshell cmd_name"
My plan is to alias a set of command names to a single main script. On invocation of any of those commands, the main script would be called; it can define certain default functions, plus a constructor and a destructor. It would then check the command's script file for a constructor definition and call it if present (else call the default one), source that script, and finally call the destructor. This also gives the script access to the default functions set up by the main script. This works fine, but these aliases can't be exported to subshells.
To add to that, I only want these default functions available to that particular aliased set of commands, and I want them destroyed once command execution is complete. That's why I can't just put them in .bash_profile, which would make them absolutely global.
command_name() {
    # initial code
    source path/to/command_name
    # destructing code
}
Another option I found was to create a function for each name and call my script inside it; this one is exportable too. In this way I could encapsulate every command in a function of the same name and easily have initialization and destruction code. The problem here is that I can't define any more functions inside that function, and doing everything inside one function would get really clumsy.
Another thought I had was symbolic links, but there seems to be a limit to how many I can create to a particular script.
What would be the best way to achieve this? Or, if this design is somehow inappropriate, can someone please explain why?
IIUC you're trying to achieve the following:
A set of commands that necessarily take place in the context of the current shell rather than a new shell process.
These commands have a lot of common functionality that needs to be factored out of these commands.
The common functionality must not be accessible to the current shell.
In that case, the common code would be e.g. functions & variables that you have to explicitly unset after the command has been executed. Therefore, your best bet is to have a function per-command, have that function source the common code, and have the common code also have another function (called before you return) to unset everything.
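A minimal sketch of that pattern (the file and function names here are made up for illustration):

```shell
#!/usr/bin/env bash
# Stand-in for the shared file: it defines the common helpers plus a
# cleanup function that unsets everything it introduced.
common=$(mktemp)
cat > "$common" <<'EOF'
greet() { echo "hello, $1"; }
common_cleanup() { unset -f greet common_cleanup; }
EOF

mycmd() {
  source "$common"   # load the shared helpers into the current shell
  greet "world"      # the command body uses them
  common_cleanup     # remove them again before returning
}

mycmd                                        # prints: hello, world
type greet >/dev/null 2>&1 || echo "greet is gone"
rm -f "$common"
```

After mycmd returns, greet no longer exists in the shell, which is exactly the "not accessible to the current shell" requirement.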
Notes:
You can in fact declare functions inside other functions, but the nested functions will be global once the outer function runs - so name them uniquely and don't forget to unset them.
If they don't need to affect the current shell then you can just put them in their own file, source the common code, and not unset anything.
There is generally no limit to how many symlinks you can create to a single file. The limit on symlink chains (symlink to symlink to symlink etc.) is low.
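A quick sketch of the "nested functions are global" behaviour mentioned in the notes above:

```shell
#!/usr/bin/env bash
outer() {
  # looks nested, but bash registers 'inner' globally when outer runs
  inner() { echo "defined inside outer"; }
  inner
}
outer   # prints: defined inside outer
inner   # also works now: inner has leaked into the global scope
unset -f inner outer   # explicit cleanup, as recommended above
```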
I have written an interactive shell script in KSH, there is a main menu where the options call different functions stored as code snippets on separate files.
The script works fine, but after a while it exits with 'recursion too deep'.
There is no obvious pattern of when it happens, it can happen using any of the functions at any time. The only clear pattern is that the longer I use the script the more likely it is to cause the error.
There are no recursive functions in the script, so I assume that I am creating a callback loop somewhere that gets too big after a while.
Is there a function I can call that will clear any code queue that has built up? (I'm new to shell scripting - I'm thinking of the clearQueue function in jQuery animations.)
I've tried to find the callback loop with no success so a workaround is tempting
thanks
Turns out the recursion error was because I was sourcing files too many times and exceeding the open file descriptor limit.
I rewrote it so I only had to source each file once, and it was fine.
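One way to enforce "source each file once" is an include guard; a sketch (the file and variable names are invented):

```shell
#!/usr/bin/env bash
# Stand-in helper file: counts how many times it has been sourced.
helpers=$(mktemp)
echo 'helper_loads=$((helper_loads + 1))' > "$helpers"

load_helpers() {
  # guard variable: only source the helper file the first time
  if [ -z "${HELPERS_SOURCED:-}" ]; then
    HELPERS_SOURCED=1
    source "$helpers"
  fi
}

helper_loads=0
load_helpers
load_helpers           # no-op: the guard prevents a second source
echo "$helper_loads"   # prints: 1
rm -f "$helpers"
```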
In pseudo-code you could do something like
while true; do
    print_menu
    val=$(get_input)
    case $val in
        1)
            do_task_1
            ;;
        # etc...
    esac
done
Once do_task_1 is done, the function or script or whatever it is returns, and the case statement ends, and the loop iterates printing the menu and getting the input again.
I work on a team that decided long ago to use Chef's template resource to make dynamic scripts to be executed in an execute resource block.
I know this feature was only built to generate config files and the like, but I have to work with what I've got here.
Basically I need to know how to write to Chef::Log from a Ruby script generated from a template block. The script is not in the same context as the cookbook that generated it, so I can't just call require 'chef/log' in the script. I also do not want to just append the chef-run.log because that runs into timing problems.
Is there any way to accomplish this as cleanly as possible without appending to chef-run.log?
Thank you for your time.
Chef::Log is a global, so technically you can just call its methods directly from inside a template, but this is really weird and you probably shouldn't. Do whatever logging you need from the recipe code instead. If you need to log some values, compute them in the recipe and then pass them in using variables.
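For example, as a sketch in recipe DSL (the attribute, path, and template names here are hypothetical):

```ruby
# in the recipe: compute and log here, where Chef::Log is available
db_host = node['myapp']['db_host']
Chef::Log.info("rendering sync script with db_host=#{db_host}")

# pass the computed value into the generated script as a template variable
template '/opt/scripts/sync.rb' do
  source 'sync.rb.erb'
  variables(db_host: db_host)
end
```

The generated script then just interpolates the value it was given, and all logging stays in the Chef run where it belongs.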
I have a Clojure program that at some point executes a function called db-rebuild-files-table.
This function takes a directory filename as a single string argument and calls a recursive function that descends into the directory file tree, extracts certain data from the files there and logs each file in a mysql database. The end result of this command is a "files" table populated by all files in the tree under the given directory.
What I need is to be able to run this command periodically from the shell.
So, I added the :gen-class directive in the file containing my -main function that actually calls (db-rebuild-files-table *dirname*). I run lein uberjar and generate a jar which I can then execute with:
java -jar my-project-SNAPSHOT-1.0.0-standalone.jar namespace.containing.main
Sure enough, the function runs, but the database ends up with only a single entry, for the directory *dirname* itself. When I execute the exact same sexp in the Clojure REPL I get the right behaviour: the whole file tree under *dirname* gets processed.
What am I doing wrong? Why does the call (db-rebuild-files-table *dirname*) behave inconsistently when called from the REPL and when executed from the command line?
[EDIT] What's even weirder is that I get no error anywhere. All function calls seem to work as they should. I can even run the -main function in the REPL, and it updates the table correctly.
If this works in the REPL, but not when executed stand-alone, then I would guess that you may be bitten by the laziness of Clojure.
Does your code perhaps need a doseq in order to get the benefits of a side-effect (e.g. writing to your database)?
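A sketch of the difference (log-file! here stands in for whatever per-file side effect the code performs):

```clojure
;; map is lazy: outside the REPL nothing may ever force this sequence,
;; so the side effects never run
(defn log-files-lazy [files]
  (map log-file! files))

;; doseq is eager and intended for side effects
(defn log-files-eager [files]
  (doseq [f files]
    (log-file! f)))

;; or force the lazy seq with doall if the return value is still wanted
(defn log-files-forced [files]
  (doall (map log-file! files)))
```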
Nailed it. It was a very insidious bug in my program - I got bitten by Clojure's laziness.
My file-tree function used map internally, and so it produced just the first value, the root directory. For some reason I still can't figure out, when executed at the REPL, evaluation was actually forced and the whole tree seq was produced. I just added a doall in my function and that solved it.
Still trying to figure out why executing something at the REPL forces evaluation, though. Any thoughts?