Interactive shell script: recursion too deep

I have written an interactive shell script in KSH. There is a main menu whose options call different functions stored as code snippets in separate files.
The script works fine, but after a while it exits with 'recursion too deep'.
There is no obvious pattern to when it happens; it can happen with any of the functions at any time. The only clear pattern is that the longer I use the script, the more likely the error becomes.
There are no recursive functions in the script, so I assume I am creating a callback loop somewhere that grows too large over time.
Is there a function I can call that will clear any code queue that has built up? (I'm new to shell scripting; I'm thinking of the clearQueue function in jQuery animations.)
I've tried to find the callback loop without success, so a workaround is tempting.
Thanks!

It turns out the recursion error occurred because I was sourcing files so many times that I exceeded the file operator limit.
I rewrote the script so that each file only has to be sourced once, and the error went away.
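For anyone hitting the same thing, a minimal sketch of the source-once approach (the snippet directory and file extension here are hypothetical):

# Source every snippet exactly once at startup,
# rather than re-sourcing it on every menu selection.
for f in ./functions/*.ksh; do
    . "$f"
done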

In pseudo-code you could do something like:
while true; do
    print_menu
    val=$(get_input)    # capture the user's menu choice
    case $val in
        1)
            do_task_1
            ;;
        # etc...
    esac
done
Once do_task_1 finishes, the function (or script, or whatever it is) returns, the case statement ends, and the loop comes around again, printing the menu and reading input once more.

Related

How can a Rego script call a shell script?

I'd like to call a shell script from within a Rego script.
How can I do it?
The rego built-in functions don't seem to help.
You can't. Rego isn't a general-purpose programming language, and policy evaluation should ideally be free of side effects; i.e. evaluating the same policy twice with identical input should produce identical results. The best alternative is likely to execute your shell script first and provide the result as input to OPA. If you really want to run a shell script from inside your policy, a custom built-in function would be the way to go.
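For the first alternative, a minimal sketch (the script, file, and query names are hypothetical; opa eval's --input and --data flags are standard):

# Run the side-effecting script first, then hand its output to OPA as input.
./collect_input.sh > input.json
opa eval --input input.json --data policy.rego 'data.example.allow'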

accelerate Tcl eval

I'm currently writing a Tcl-based tool for symbolic matrix manipulation, but the code is getting slow. I'm looking for ways to accelerate my Tcl code (Tcl version 8.6).
I have one suspicion. My code builds lists with a command name as the first element and command arguments as the following elements (this comes from emulating an object-oriented approach). I use eval to invoke these commands (and this is done often in the recursive processing). I read at https://wiki.tcl-lang.org/page/eval and https://wiki.tcl-lang.org/page/Tcl+Performance that eval may be slow.
I have three questions:
What would be the fastest way to invoke a command from a list (command name plus parameters) that has been constructed just beforehand?
Would it accelerate the code to separate the command name myCmd and the parameter list myPar and invoke the command with [$myCmd {*}$myPar] instead (suggested at https://stackoverflow.com/a/27619692/3852630)?
Is the trick with if 1 instead of eval still promising in 8.6?
Thanks a lot for your help!
Above all, don't assume: time it to be sure. Be aware when timing things that running a thing repeatedly may change how long it takes (as caches warm up). Think carefully about what you actually want to measure the speed of.
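For instance, a quick sketch using Tcl's built-in time command (string length here is just a placeholder workload; substitute your real dispatch):

set myCmd string
set myPar {length "hello world"}
set asList [linsert $myPar 0 $myCmd]

# Each call reports microseconds per iteration.
puts [time {eval $asList} 100000]
puts [time {$myCmd {*}$myPar} 100000]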
The eval command is usually slow, but not in all cases. If you give it a list that you've constructed (e.g., with list or linsert or lappend or…) then it's fairly fast as it can avoid reparsing the input; it knows, but only in that case, that it can skip straight to dispatching to the command implementation. The other case that is fast is when you give it a value that was previously given to eval; the bytecode is already built and cached. These notes also apply with uplevel.
Doing $myCmd {*}$myParameters is fairly fast too; that's bytecoded into “assemble the words on the Tcl operand stack and do the right command dispatch” which is very close to what it would be for an arbitrary user command anyway (which very rarely have direct bytecode implementations).
I'd expect things with if 1 to be very quick in some cases and very slow in others; it forces full compilation, so if the result can be cached well it will be fast, and if it can't it will be slow. And if you're just calling a command, it won't make much difference at best. The cases where it wins are when the thing being called is itself a bytecoded command and where you can cache things correctly.
If you're dealing with an ordinary command (e.g., a procedure, or one of Tcl's commands that touch the OS), I'd go with option 2: $myCmd {*}$myParameters or variants on it. It's about as fast as you're going to get. But I would not do:
set myParameters [linsert $myOriginalValues 0 "literal1" [cmdOutput2] $value3]
$myCmd {*}$myParameters
That's ridiculous. This is clearer and cleaner and faster:
$myCmd "literal1" [cmdOutput2] $value3 {*}$myOriginalValues
Part of the point of expansion syntax ({*}) is that you don't need to do complex argument marshalling, and that's good because complexity is hard to get right all the time.
A note about K and unsharing objects
Avoid copying data in memory. Change
set mylist [linsert $mylist 0 some new content]
to
set mylist [linsert $mylist[set mylist ""] 0 some new content]
This dereferences the value of the variable and then sets the variable to the empty string, dropping the value's reference count; with only one reference remaining, linsert can modify the list in place rather than copying it.
See also https://stackoverflow.com/a/64117854/7552

Refactor eval(some_variable).is_a?(Proc) to not use eval

I have some old code that looks like:
some_variable = "-> (params) { Company.search_by_params(params) }"
if eval(some_variable).is_a?(Proc)
...
RuboCop is complaining about the use of eval. Any ideas on how to remove the usage of eval?
I don't really understand Procs so any guidance on that would be appreciated.
Simple. Don't define your variable as a string, but as a lambda (which is a Proc):
my_lambda = -> (params) { Company.search_by_params(params) }

if my_lambda.is_a?(Proc)
  # do stuff
end
But why would you instantiate a string object containing what appears to be a normal lambda (which is a Proc), when you can define the Proc directly instead?
I am going to answer the question "If I want to run code at a later time, what is the difference between using a Proc and an eval'd string?" (which I think is part of your question and confusion).
What eval does is take a string, parse it into code, and then run it. This string can come from anywhere, including user input, which makes eval very unsafe and problematic, especially when used with raw user input.
The problems with eval are usually:
There is almost always a better way to do it
Very dangerous and insecure
Makes debugging difficult
Slow
Using eval allows full control of the Ruby process, and if the Ruby process has high permissions, potentially even root access to the machine. So the general recommendation is to use eval only if you absolutely have no other option, and especially not with user input.
Procs/lambdas/blocks also let you save code for later (and they solve most of the problems with eval; they are the "better way"), but instead of storing arbitrary code as a string to parse later, they are code already, parsed and ready to go. In some ways, they are methods you can pass around. Making a proc/lambda gives you an object with a #call method; when you later want to run the proc/block/lambda, you invoke #call with whatever arguments it needs. What you can't do with procs is let users write arbitrary code (and generally that's good): you have to write the code for the proc in a file Ruby loads (most of the time). eval does get around that, but you should really rethink whether you want that to be possible.
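A minimal sketch of storing and later calling a proc (Company.search_by_params is taken from the question; assume it exists):

# Already-parsed code stored for later; no string parsing at call time.
search = ->(params) { Company.search_by_params(params) }

search.is_a?(Proc)        # => true (lambdas are Procs)
search.call(name: "Acme") # runs the stored code now, with arguments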
Your code sample oddly combines both of these methods: it evaluates a string that builds a lambda. What's happening is that eval runs the code in the string right away and returns the (last) result, which in this case happens to be a lambda/proc. (Note that this happens every time you run eval, producing multiple copies of the proc with different identities but the same behavior.) Since the code inside the string happens to make a lambda, the value returned is a Proc which can later be #call'd. So when eval runs, the code is parsed and a new lambda is created, with the code in the lambda stored to run at a later time. If the code inside the string did not create a lambda, then all that code would run immediately when eval was called with the string.
This behavior might be desired, but there is probably a better way to do this, and this is definitely a foot-gun: there are at least a half dozen subtle ways this code could do unintended things if you weren't really careful with it.

Bash Functions Order and Timing

This should be easy to answer, but I couldn't find exactly what I was asking on google/stackoverflow.
I have a bash script with 18 functions (785 lines). Ridiculous, I know; I need to learn another language for the lengthy stuff. I have to run these functions in a particular order, because functions later in the sequence use info from the database and/or text files that were modified by the preceding functions. I am pretty much done with the core functionality of all the functions individually, and I would like a function to run them all (One Ring to rule them all!).
So my questions are, if I have a function like so:
function precious()
{
    rings_of    # Functions in sequence
    elves       # This function modifies the DB
    men         # This function uses the DB to modify text
    dwarves     # This function uses that modified text
}
Would variables be carried from one function to the next if declared like so? (inside of a function):
function men()
{
    ...
    frodo_sw_name=`some DB query returning the name of Frodo's sword`
    ...
}
Also, if the functions are called in a specific order, as seen above, will Bash wait for one function to finish before starting the next? I am pretty sure the answer is yes, but I have a lot of typing to do either way, and since I couldn't find the answer quickly on the internet, I figured it might benefit others to have it posted as well.
Thanks!
Variables persist unless you run the function in a subshell. That happens if you run it as part of a pipeline, or group it with (...); use { ... } instead for grouping if you don't want to create a subshell.
The exception is if you explicitly declare the variables in the function with declare, typeset, or local, which makes them local to that function rather than global to the script. But you can also give declare and typeset the -g option to create global variables (this would obviously be inappropriate for the local declaration).
See this tutorial on variable scope in bash.
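A hedged sketch of both rules (the function and variable names are made up):

elves() {
    sword="Sting"          # no 'local': global to the whole script
    local forge="Eregion"  # 'local': visible only inside elves()
}

elves                                       # normal call: the global persists
echo "sword=$sword forge=${forge:-unset}"   # sword=Sting forge=unset

unset sword
elves | cat                    # a pipeline runs elves in a subshell
echo "sword=${sword:-unset}"   # sword=unset: the assignment was lost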
Commands are all run sequentially, unless you deliberately background them with & at the end. There's no difference between functions and other commands in this regard.
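For contrast, explicit backgrounding with & (same hypothetical names):

elves &   # run elves in the background
men       # men starts immediately, without waiting for elves
wait      # block until all background jobs have finished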

Inconsistency running a Clojure jar from the command line

I have a clojure program that at some point executes a function called db-rebuild-files-table.
This function takes a directory name as a single string argument and calls a recursive function that descends into the directory's file tree, extracts certain data from the files there, and logs each file in a MySQL database. The end result of this command is a "files" table populated with all the files in the tree under the given directory.
What I need is to be able to run this command periodically from the shell.
So, I added the :gen-class directive in the file containing my -main function that actually calls (db-rebuild-files-table *dirname*). I run lein uberjar and generate a jar which I can then execute with:
java -jar my-project-SNAPSHOT-1.0.0-standalone.jar namespace.containing.main
Sure enough, the function runs, but the database ends up with only a single entry: the directory *dirname* itself. When I execute the exact same sexp in the Clojure REPL I get the right behaviour: the whole file tree under *dirname* gets processed.
What am I doing wrong? Why does the call (db-rebuild-files-table *dirname*) behave inconsistently when called from the REPL and when executed from the command line?
[EDIT] What's even weirder is that I get no error anywhere. All function calls seem to work as they should. I can even run the -main function in the REPL and it updates the table correctly.
If this works in the REPL, but not when executed stand-alone, then I would guess that you may be bitten by the laziness of Clojure.
Does your code perhaps need a doseq in order to get the benefits of a side-effect (e.g. writing to your database)?
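To illustrate, a minimal sketch (log-file! is a hypothetical stand-in for the database insert):

(defn log-file! [f]
  (println "logging" f)   ; stands in for the MySQL insert
  f)

;; Lazy: in -main the return value is discarded, so map may never
;; realize the sequence and the side effects never run.
(defn rebuild-lazy [files]
  (map log-file! files))

;; Forced: doall (or doseq) realizes the whole seq, so every file is
;; logged even when the result is thrown away.
(defn rebuild [files]
  (doall (map log-file! files)))

(Printing a value at the REPL realizes lazy seqs, which is likely why the unforced version appeared to work interactively.)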
Nailed it. It was a very insidious bug in my program: I got bitten by Clojure's laziness.
My file-tree function used map internally, and so only the first value, the root directory, was ever produced. For some reason I still can't figure out, when executed at the REPL evaluation was actually forced and the whole tree seq was produced. I just added a doall in my function and that solved it.
Still trying to figure out why executing something at the REPL forces evaluation, though. Any thoughts?
