Could anyone explain the difference between the contents of the compiled files in the slprj and sfprj folders? Basically, what is the main intent of these directories? And what is the connection between the slprj and sfprj folders and Reference Models, S-Functions, and MEX files?
When simulated, many blocks (such as the MATLAB Function block and the Stateflow Chart) are automatically converted to C code and compiled into an S-function. It is that S-function which is run during the simulation. This all happens somewhat opaquely to the user.
The slprj directory (or more specifically the subfolders under it) contains the remnants of that conversion, and the files/data/etc. required to run the block correctly during simulation.
Until very recently the directory was named sfprj.
The directory was first introduced with the introduction of Stateflow - hence the name sfprj.
Since then, more and more functionality (e.g. the MATLAB Function and Model Reference blocks) has made use of the same directory, so recently it was renamed to slprj to reflect that it is now used by Simulink in general and not just Stateflow.
Several weeks ago, SBCL was updated to 2.0.2 and brought the block compilation feature. I have read this article to understand what it is.
I have a question: what is the difference between (declaim (inline some-function)) and block compilation? Is block compilation applied automatically by the compiler?
Thanks.
Inline compilation is a specific optimization technique: a function being called is directly integrated into the calling function - usually using its source code - and then compiled.
This means that the same function might be inlined not just into one function, but into multiple functions.
Advantage: the overhead of calling a function disappears.
Disadvantage: the code size increases, and the calling function(s) need to be recompiled when the inlined function changes and we want this change to become visible. Macros have the same problem.
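A minimal sketch of inlining in portable Common Lisp (the function names are made up):

;; Request inlining of SQUARE; the declamation must be visible
;; before any caller is compiled.
(declaim (inline square))
(defun square (x) (* x x))

;; SQUARE's code is integrated here, so there is no call overhead.
(defun sum-of-squares (a b)
  (+ (square a) (square b)))

;; If SQUARE is later redefined, SUM-OF-SQUARES keeps the old,
;; inlined code until it is recompiled.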
Block compilation means that a bunch of code gets compiled together under different semantic constraints, and that this enables the compiler to do a bunch of new optimizations.
The Common Lisp standard has support for block compilation of single files: it allows the file compiler to assume that a file is such a block of code.
Example from the Common Lisp standard:
3.2.2.3 Semantic Constraints
A call within a file to a named function that is defined in the same file refers to that function, unless that function has been declared notinline. The consequences are unspecified if functions are redefined individually at run time or multiply defined in the same file.
This allows compiled code to call a global function without going through the symbol's function cell. It thus disables late binding for global function calls - in this file and for functions defined in this file.
The standard does not say how this is to be achieved, but the compiler might just allocate the code somewhere and compile the calls as direct jumps to it.
So this part of block compilation is defined in the standard, and some compilers do it.
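In SBCL 2.0.2 and later, single-file block compilation is requested through compile-file's :block-compile and :entry-points arguments. A sketch (the file and function names are made up):

;; Compile the whole file as one block; only FIB remains callable
;; from outside, and calls between functions defined in the file
;; can be compiled as fast local calls.
(compile-file "fib.lisp" :block-compile t :entry-points '(fib))
(load "fib.fasl")

So block compilation is not fully automatic here: you ask for it when compiling the file.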
Block compilation for multiple files
If the file compiler can use block compilation for one file, then what about multiple files? A few compilers can also tell the file compiler that several files together make up a block for compilation. CMUCL does that. SBCL was derived and simplified from CMUCL and lacks this feature so far. I think Lucid Common Lisp (which is no longer actively sold) supported something like that, too.
It might be useful to add this to SBCL, too.
My Fortran code is structured as follows:
There are two folders (with several subdirectories):
1.
/home/user/general_part
where some very general files are located and which should be used in several versions of the program.
files: (with relative path)
- mainsubdir/main.F
- subdir1/file1.F
- subdir1/headerfile1.h
2.
/home/user/special_part/special_case1
where the case-dependent files are located.
files: (with relative path)
- subdir2/file2.F
- subdir2/headerfile2.h
- subdir3/file3.F
How should I organize the build process?
Should I use separate makefiles in each of the directories?
Where should the object files be located (especially the ones built from the general files)?
My aim would be that I can start the build-process from the directory:
/home/user/special_part/special_case
with a simple make or a little script.
In the end it should be possible to always build a program from the general files in 1. together with several special-case files located in:
/home/user/special_part/special_case1
/home/user/special_part/special_case2
...
The reason nobody is answering is probably that the question is too general. Be more specific.
Say something like: "this is the program I want to build, and this is my makefile, please critique my makefile".
You can organize it any way you like, as long as it's logical and consistent. I put some beginner guidelines at
https://stackoverflow.com/questions/19816058/makefile-fibonacci/19821801#19821801
No. Make is really designed for, and works best with, a single makefile. You can have relevant makefile fragments in each directory, which are included in the main makefile; do not put complete makefiles in each subdirectory. Google the classic paper "Recursive Make Considered Harmful" to see why that is so. A sketch of such a layout follows below.
You can place your results anywhere you want: some people place results alongside the sources, some in a separate directory. Just place results in some logical and consistent way. The same goes for intermediate files, such as object files.
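A minimal sketch of that non-recursive layout, using the directories from your question (the fragment names, compiler, and flags are assumptions):

# /home/user/special_part/special_case1/Makefile - the single top-level makefile
GENERAL := /home/user/general_part

FC     := gfortran                        # assumed compiler
FFLAGS := -I$(GENERAL)/subdir1 -Isubdir2  # directories holding the .h files

# Each source directory contributes a one-line fragment that appends to SRCS,
# e.g. $(GENERAL)/subdir1/sources.mk contains: SRCS += $(GENERAL)/subdir1/file1.F
include $(GENERAL)/mainsubdir/sources.mk
include $(GENERAL)/subdir1/sources.mk
include subdir2/sources.mk
include subdir3/sources.mk

# Keep object files out of the shared source tree: one local obj/ per case.
# (This assumes the source basenames are unique across directories.)
OBJS  := $(addprefix obj/,$(notdir $(SRCS:.F=.o)))
VPATH := $(sort $(dir $(SRCS)))

prog: $(OBJS)
	$(FC) $(FFLAGS) -o $@ $^

obj/%.o: %.F | obj
	$(FC) $(FFLAGS) -c -o $@ $<

obj:
	mkdir -p $@

Running a plain make in /home/user/special_part/special_case1 (or in any other special_case directory that has its own copy of this makefile) then builds the program, and the shared general sources never get object files written next to them.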
So, this has never happened before, but for some reason I am unable to view a default MATLAB file. That is, a *.m file that comes with your MATLAB program (for example 'fft', 'transpose', 'angle', etc.).
For example, if I wanted to inspect how the inverse tangent was being computed, all I would do was:
open atan
Right now, however, all I get is a *.m file with nothing in it but comments about the function - no actual code.
What is going on? I have MATLAB 2013a. I have never seen this before. Why can't I inspect how MATLAB runs certain commands?
Thanks!
This is common; for instance, try edit sum - you will not be able to see the code.
The term MATLAB built-in functions usually refers exactly to those functions whose implementation is not written in the MATLAB language but embedded into the program itself. Built-in functions are part of TMW's know-how and are therefore unavailable to the general user.
The .m file is there simply for the documentation.
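One way to tell the two kinds apart is the which command; the exact output paths vary by installation:

% Built-ins are reported as such:
which sum       % prints something like: built-in (.../@double/sum)
which atan      % likewise a built-in

% Ordinary toolbox functions resolve to a readable .m file:
which mean      % prints a path ending in mean.m
open mean       % this one shows actual MATLAB source code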
First off - hello, and thank you for reading this.
I have a DLL for which I do not have the source code, but I need to add some functionality to it.
I made another DLL in C (using Visual Studio) implementing all the needed functionality.
Now I need to insert the generated code from this new DLL into the target DLL (it has to be done at the file level, not at runtime).
I will probably create a new PE section in the target DLL and put all the code/data/rdata from my DLL there. The problem is that I somehow need to fix up the IAT and the relocs for this newly inserted code in the target DLL.
My question is:
What is the best way to do it?
It would be nice if Visual Studio had an option to build using only (or mostly) relative addressing - this would save me a lot of work when dealing with the relocs.
I guess I could encapsulate all my vars and constants in a struct; hopefully MSVC would then only need to relocate the address of this "container" struct and could use relative addressing to access its members. But I don't know if this is a good idea.
I could even go further and get rid of the IAT by using function pointers that dynamically load the needed modules (much like a delay-load stub) - and again, put those function pointers inside the "container" struct mentioned before (see the sketch below).
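A sketch of that idea in C (MessageBoxA is just an arbitrary example import; nothing here is specific to the target DLL). All externals are reached through one static struct, so in principle the only absolute addresses that need fixing are the references to the struct itself, and the injected code needs no IAT entries of its own:

#include <windows.h>

/* Container: every external the injected code needs lives behind
   one pointer, resolved at first use instead of through the IAT. */
struct Imports {
    HMODULE user32;
    int (WINAPI *pMessageBoxA)(HWND, LPCSTR, LPCSTR, UINT);
};

static struct Imports g_imp;    /* the one thing left to relocate */

static void resolve_imports(void)
{
    if (g_imp.user32 == NULL) {
        g_imp.user32 = LoadLibraryA("user32.dll");
        g_imp.pMessageBoxA = (int (WINAPI *)(HWND, LPCSTR, LPCSTR, UINT))
            GetProcAddress(g_imp.user32, "MessageBoxA");
    }
}

void injected_entry(void)
{
    resolve_imports();   /* delay-load style: no build-time import needed */
    g_imp.pMessageBoxA(NULL, "hello from the new section", "demo", MB_OK);
}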
The last option I have is to do it all by hand, manually editing the binary in hex... which I really didn't want to do, because it would take a good amount of time for every single IAT entry and reloc entry. I wrote a PE file encryptor some time ago, so I know most of the inner workings and know it can be done; I just want to hear your thoughts - and maybe a tool already exists to help me out?
Any suggestions are highly appreciated!
Thanks again for taking the time to read this!
Since you are asking for suggestions, take a look at the very good PDF document "Portable Executable File Format - A Reverse Engineer View". The section "Adding Code to a PE File" describes some techniques (and presents tools) for adding code to an existing PE image without having the source of the target image (your scenario) by manipulating the IAT and the section tables.
How can I include the procedures from one NetLogo file in another? Basically, I want to separate the code of a genetic algorithm from my (quite complicated) fitness function, but, obviously, I want the fitness reporter, which will reside in "fitness.nlogo", to be available in the genetic algorithm code, probably "genetic.nlogo".
If it can be done, how are the procedures imported, and the code executed? Is it like Python, where importing a module pretty much executes everything in the module, or like C/C++, where the file is blindly "joined"?
This may be a stupid question, but I couldn't find anything on Google. The NetLogo documentation says something about __includes, an experimental keyword that may do the trick, but there's not much explained there, and no example either.
Any hints? Should I go with __includes? How does it work?
To include a file you use
__includes["libfile.nls"]
After adding this and pressing the “Check” button, a new button will appear next to the Procedures drop-down menu. There you can create and manage multiple source files.
The libfile.nls file is just a text file that contains NetLogo code. It is not a NetLogo model - models always end in .nlogo, since a model contains a lot of other information besides the NetLogo code.
Including a file is equivalent to just inserting all of its contents at that point. To make it work like a reusable library file, write procedures that take agentsets and parameters as inputs, so that they are independent of global definitions and interface settings.
The feature is documented in the NetLogo User Manual at http://ccl.northwestern.edu/netlogo/docs/programming.html#includes.
You can create a file libfile.nls and in the same folder create your main model model.nlogo.
After that, go to your model.nlogo and write:
__includes["libfile.nls"]
This file contains the reporters and procedures that you can call in your model, for example:
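A minimal sketch (the file name and reporter are made up). Note how the reporter depends only on its argument, not on globals:

;; libfile.nls
to-report mean-of [xs]            ;; works on any list passed in
  report (sum xs) / (length xs)
end

;; model.nlogo, Code tab
__includes["libfile.nls"]

to setup
  clear-all
  show mean-of [1 2 3]            ;; prints 2: the included reporter is visible here
end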