Can the return value of a function call be made symbolic so as to bypass executing that function? - klee

I want to avoid doing inter-procedural symbolic execution. Perhaps the call could produce a return value that has no constraints and might resolve to any possible concrete value.
Is something like this even possible?
The reason I want to do this is that I want to avoid executing certain functions that contain very big loops and don't really modify global data.

Such functionality is not implemented in KLEE. If you want it, you will have to implement it yourself.
The file you should look at is lib/Core/Executor.cpp. The Executor handles the job of executing instructions symbolically, and the specific function you need to change is called "executeCall". There you can check the name of the called function; if it matches your target, skip the call by setting the program counter to the next instruction and make the return value symbolic.
With some programming effort, I think what you need is achievable.

If it is possible to modify the source code, you can replace those function calls with a call to a make_symbolic-style helper, where you basically create a fresh symbolic variable and return it in place of the original function's return value. Or you could override those functions at link time with stubs that return a fresh symbolic value.

Related

Programming style: write new functions or extend existing ones

Let's say I have a form with an Image field and a function ```getImageWidth()``` that checks the image's width. After some time, I add a new Mobile Image field to my form, and I am wondering what the best practice is for writing a function to check the Mobile Image's width: extend the old ```getImageWidth()``` function (by adding an isMobileImage parameter), or write a new ```getMobileImageWidth()``` function? The code for this new function would be almost identical to the old one.
What are your opinions?
Thank you,
I think that getMobileImageWidth() is the better option. For example, there is the article Remove Flag Argument on Martin Fowler's blog, and in his book Refactoring: Improving the Design of Existing Code he writes:
I dislike flag arguments because they complicate the process of understanding what
function calls are available and how to call them. My first route into an API is usually
the list of available functions, and flag arguments hide the differences in the function
calls that are available. Once I select a function, I have to figure out what values are
available for the flag arguments. Boolean flags are even worse since they don’t convey
their meaning to the reader—in a function call, I can’t figure out what true means. It’s
clearer to provide an explicit function for the task I want to do.
I think it is exactly what you need.
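A sketch of how the two explicit functions can avoid duplication: all names and width limits below are hypothetical, but the idea is that both public functions delegate to one shared private helper, so there is no flag argument and no copied body.

```python
IMAGE_MAX_WIDTH = 1200        # hypothetical limit for desktop images
MOBILE_IMAGE_MAX_WIDTH = 600  # hypothetical limit for mobile images

def _width_is_valid(image, max_width):
    """Shared logic lives here; the public functions just delegate."""
    return image["width"] <= max_width

def image_width_is_valid(image):
    return _width_is_valid(image, IMAGE_MAX_WIDTH)

def mobile_image_width_is_valid(image):
    return _width_is_valid(image, MOBILE_IMAGE_MAX_WIDTH)

print(image_width_is_valid({"width": 800}))         # True
print(mobile_image_width_is_valid({"width": 800}))  # False
```

Callers read `mobile_image_width_is_valid(img)` instead of `width_is_valid(img, True)`, which is exactly the readability gain Fowler describes.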

How can I defeat the Go optimizer in benchmarks?

In Rust, you can use the black_box function to force the compiler to:
- assume that the function's argument is used (forcing it not to optimize away code that generates that value), and
- not be able to inspect how its return value is produced (preventing it from doing constant-folding and other such optimizations).
Is there a similar facility in Go (to accomplish either task)?
Is there a similar facility in Go (to accomplish either task)?
No.
If you want to use a result: assign it to an exported global.
runtime.KeepAlive is sometimes recommended, as in the following GitHub issue. Unfortunately, it's unclear whether anything equivalent exists for function arguments, or whether KeepAlive is even guaranteed to defeat these optimizations:
https://github.com/golang/go/issues/27400

Setting up default overwritable constructors and destructors or other functions for a set of commands

alias cmd_name="source mainshell cmd_name"
My plan is to alias a set of command names to a single main script. On invocation of any command, that main script is called; it can define certain functions, including a constructor and a destructor. It then checks the command's script file for a constructor definition: if one exists, it calls that constructor, otherwise it calls the default one. It then sources the command's script and afterwards calls the destructor. This also gives the command's script access to the default functions set up by the main script. This works fine, but these aliases can't be exported to subshells.
To add to that, I want these default functions available only to that particular aliased set of commands, and I want them destroyed once command execution is complete. That's why I can't just put them in .bash_profile, which would make them absolutely global.
command_name() {
# initial code
source path/to/command_name
# destructing code
}
Another option I found was to create a function for each command name and call my script inside it; this approach is exportable too. That way I could encapsulate every command in a function of the same name and easily run setup and teardown code around it. The problem here is that I can't define any more functions inside that function, and doing everything inside one function gets really clumsy.
Another thought I had was symbolic links, but they seem to have a limit on how many I can create to a particular script.
What should be the best way to achieve this or if its somehow an inappropriate design, can someone please explain?
IIUC you're trying to achieve the following:
A set of commands that necessarily take place in the context of the current shell rather than a new shell process.
These commands have a lot of common functionality that needs to be factored out of these commands.
The common functionality must not be accessible to the current shell.
In that case, the common code would consist of functions and variables that you have to explicitly unset after the command has been executed. Your best bet is therefore to have one function per command, have that function source the common code, and have the common code also provide another function (called before you return) that unsets everything.
Notes:
You can actually declare functions inside other functions, but the nested functions will still be global, so name them uniquely and don't forget to unset them.
If the commands don't need to affect the current shell, you can just put them in their own script files, source the common code there, and not unset anything.
There is generally no limit on how many symlinks you can create to a single file. The low limit is on symlink chains (a symlink to a symlink to a symlink, etc.).

Bash Functions Order and Timing

This should be easy to answer, but I couldn't find exactly what I was asking on google/stackoverflow.
I have a bash script with 18 functions (785 lines). Ridiculous, I know; I need to learn another language for the lengthy stuff. I have to run these functions in a particular order because functions later in the sequence use info from the database and/or text files that were modified by the preceding functions. I am pretty much done with the core functionality of each function individually, and I would like one function to run them all (One ring to rule them all!).
So my questions are, if I have a function like so:
function precious()
{
    rings_of  # Functions in sequence
    elves     # This function modifies the DB
    men       # This function uses the DB to modify text
    dwarves   # This function uses that modified text
}
Would variables be carried from one function to the next if declared like so? (inside of a function):
function men()
{
...
frodo_sw_name=`some DB query returning the name of Frodo's sword`
...
}
Also, if the functions are called in a specific order, as seen above, will Bash wait for one function to finish before starting the next? I am pretty sure the answer is yes, but I have a lot of typing to do either way, and since I couldn't find the answer quickly on the internet, I figured it might benefit others to have it posted here as well.
Thanks!
Variables persist unless you run the function in a subshell. That happens if you run it as part of a pipeline, or group it with (...) (use { ...; } instead for grouping if you don't want to create a subshell).
The exception is if you explicitly declare the variables in the function with declare, typeset, or local, which makes them local to that function rather than global to the script. You can also use the -g option of declare and typeset to declare global variables from inside a function (this option would obviously be inappropriate with local).
See this tutorial on variable scope in bash.
Commands are all run sequentially, unless you deliberately background them with & at the end. There's no difference between functions and other commands in this regard.

How can I make an external toolbox available to a MATLAB Parallel Computing Toolbox job?

As a continuation of this question and the subsequent answer, does anyone know how to have a job created using the Parallel Computing Toolbox (using createJob and createTask) access external toolboxes? Is there a configuration parameter I can specify when creating the function to specify toolboxes that should be loaded?
According to this section of the documentation, one way you can do this is to specify either the 'PathDependencies' property or the 'FileDependencies' property of the job object so that it points to the functions you need the job's workers to be able to use.
You should be able to point the way to the KbCheck function in PsychToolbox, along with any other functions or directories needed for KbCheck to work properly. It would look something like this:
obj = createJob('PathDependencies', {'path_to_KbCheck', ...
                                     'path_to_other_PTB_functions'});
A few comments, based on my work troubleshooting this:
It appears that there are inconsistencies in how well nested functions and anonymous functions work with the Parallel Computing Toolbox. I was unable to get them to work, while others have been able to. (Also see here.) As such, I would recommend storing each function in its own file and including those files using the PathDependencies or FileDependencies properties, as described by gnovice above.
It is very hard to troubleshoot the Parallel Computing Toolbox, as everything happens outside your view. Use breakpoints liberally in your code; the inspect command is your friend. Also note that if there is an error, task objects will contain an error parameter, which in turn contains an ErrorMessage string and possibly an Error.causes MException object. Both of these were immensely useful in debugging.
When including Psychtoolbox, you need to do it as follows. First, create a jobStartup.m file with the following lines:
PTB_path = '/Users/eliezerk/Documents/MATLAB/Psychtoolbox3/';
addpath( PTB_path );
cd( PTB_path );
SetupPsychtoolbox;
However, since the Parallel Computation toolkit can't handle any graphics functionality, running SetupPsychtoolbox as-is will actually cause your thread to crash. To avoid this, you need to edit the PsychtoolboxPostInstallRoutine function, which is called at the very end of SetupPsychtoolbox. Specifically, you want to comment out the line AssertOpenGL (line 496, as of the time of this answer; this may change in future releases).
