I'm using the DyninstAPI (namely, the SymtabAPI component) to rewrite the symbol tables in binaries. I'm using the following method to do so:
data_region->setPtrToRawData((void*) new_raw, data_region->getRegionSize())
The method returns successfully, my error checks pass, and when I re-read the data section it has indeed been replaced. The problem is that the original binary is never rewritten with the new raw .data section; the old raw .data contents persist on disk.
I've scoured the manual to see if there is some sort of commit function but none is documented and nothing of the sort is mentioned in the examples. EDIT: I just read through some of the source code for the Region class, and it looks like I'm essentially doing what patchData does (in case that is the method I should be using).
Suggestions?
The programming manuals are available at http://www.paradyn.org/html/manuals.html.
P.S. hopefully a more reputable user can add the tags DyninstAPI and SymtabAPI for me.
After I consulted the developers, they pointed out that the function I needed to call was emit; the syntax I ended up using was:
symtab_obj->emit("new_binary.out");
Thanks Drew!
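For anyone landing here later, here is a minimal sketch of how the pieces fit together, assuming the SymtabAPI headers are available and that new_raw is a buffer of the same size as the existing .data section; the file names and the helper function are placeholders, not part of the Dyninst API:

#include "Symtab.h"
#include "Region.h"

using namespace Dyninst::SymtabAPI;

bool rewrite_data_section(const char *in_path, const char *out_path,
                          void *new_raw)
{
    Symtab *symtab_obj = NULL;
    if (!Symtab::openFile(symtab_obj, in_path))
        return false;                       // could not parse the binary

    Region *data_region = NULL;
    if (!symtab_obj->findRegion(data_region, ".data"))
        return false;                       // no .data section found

    // Replace the raw contents in the in-memory representation only
    // (new_raw is assumed to be getRegionSize() bytes long).
    data_region->setPtrToRawData(new_raw, data_region->getRegionSize());

    // Nothing reaches disk until emit() writes the rewritten binary.
    return symtab_obj->emit(out_path);
}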
As the create_proc_entry function is deprecated, what is its replacement?
I was trying to create a simple proc entry using create_proc_entry but got this error:
error: implicit declaration of function ‘create_proc_entry’
I grepped for create_proc_entry in proc_fs.h but didn't find it there. Is there something I'm missing, or is there an alternative way to do this?
The newer functions are named proc_*. You can see their declarations in include/linux/proc_fs.h.
In particular, proc_create creates a proc entry. You can check out the implementation of the other (quite useful) functions in the source file at fs/proc/generic.c. You may be particularly interested in proc_mkdir and proc_create_data.
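For illustration, here is a minimal sketch of a module that creates /proc/demo with proc_create. It assumes a kernel from roughly the 3.10-5.5 range, where proc_create still takes a struct file_operations (newer kernels use struct proc_ops instead), and the entry name "demo" is just an example:

#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/fs.h>

static int demo_show(struct seq_file *m, void *v)
{
        seq_puts(m, "hello from /proc/demo\n");
        return 0;
}

static int demo_open(struct inode *inode, struct file *file)
{
        return single_open(file, demo_show, NULL);
}

static const struct file_operations demo_fops = {
        .owner   = THIS_MODULE,
        .open    = demo_open,
        .read    = seq_read,
        .llseek  = seq_lseek,
        .release = single_release,
};

static int __init demo_init(void)
{
        /* proc_create replaces the deprecated create_proc_entry */
        if (!proc_create("demo", 0444, NULL, &demo_fops))
                return -ENOMEM;
        return 0;
}

static void __exit demo_exit(void)
{
        remove_proc_entry("demo", NULL);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");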
Note to future visitors: Please keep the date of this post in mind. The links are to the master branch of Linux, which could change over time. If you need the interface for an older version, you can find the equivalent location for a previous commit. If you want the latest version, the suggestions in this answer could have become outdated.
I want to store brief snippets of code in the database (following a standard signature) and "inject" them at runtime. One way would be using eval(my_code). Is there some way to debug the injected code with breakpoints, etc.? (I'm using RubyMine.)
I'm aware I can just log to console, etc, but I'd prefer IDE-style debugging if possible.
Hm. Let's analyze your question. Firstly, it does not really have anything to do with databases: you simply store a code block in source form somewhere, whether that is a file or a database. Secondly, you don't want IDE-style "debugging" so much as TDD-style verification. (But don't concentrate on that distinction for now.)
What you need is assertions about your code. That is, you need to describe what output your code should produce for some example inputs. Then run the code and see whether its behaviour matches those expectations. Furthermore, if you are not sure about the source of your snippets, run them in a sandbox (with $SAFE = 4). If the code fails the expectations, raise nice errors (TypeError, or even better, your own custom exception class); you can then, for example, rescue those exceptions and fall back to some default code snippets...
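As a rough sketch of that idea (the snippet string and the expected input/output pair are invented for illustration; in practice they would come from your database):

# "snippet" stands in for a code block loaded from the database.
snippet = "->(x) { x * 2 }"

begin
  fn = eval(snippet)                                  # build the callable from the stored source
  raise TypeError, "snippet is not callable" unless fn.respond_to?(:call)
  raise "unexpected output for input 2" unless fn.call(2) == 4   # assert the expectation
rescue StandardError, SyntaxError => e
  warn "snippet rejected: #{e.message}"
  fn = ->(x) { x }                                    # fall back to a default snippet
end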
... but maybe I'm not actually answering the same question that you are asking. If that's the case, let me share a link to the sourcify gem, which lets you recover the source of a block, so that you can insert a breakpoint (e.g. by saying require 'rdebug' in the middle of the code), or even convert the code to s-expressions. That's all I know.
I'm using GCC 4.7.2. My code is rather heavy on template, STL, and Boost usage. When I compile and there is an error in some class or function that is derived from or uses some Boost/STL functionality, I get error messages showing spectacularly hideous return types and/or function arguments for my classes/functions.
My question:
Is there a pretty-print type of thing for GCC warnings/errors containing Boost/STL types, so that the return types shown in error messages correspond to what I've typed in the code, or at least become more intelligible?
I have briefly skimmed through this question; however, that one is about GDB rather than GCC...
I've also come across this pretty printer in Haskell, but that just seems to add structure, not take away (mostly) unneeded detail...
Any other suggestions?
I asked a similar question, where someone suggested I try gccfilter. It's a Perl script that re-formats the output of g++ and colorizes it, shortens it, hides full pathnames, and lots more.
Actually, that suggestion answers this question really well too: it's capable of hiding unneeded detail and pretty-printing both STL and boost types. So: I'll leave this here as an answer too.
The only drawback I could see is that g++ needs to be called from within the script (i.e., piping to it is not currently possible). I suspect that's easily fixed, and in any case it's a relatively minor issue.
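To make that concrete, the invocation looks roughly like this (assuming the gccfilter script is on your PATH; any filtering options would go between the script name and the compiler command, and their exact names depend on the version you have):

gccfilter g++ -c -o widget.o widget.cpp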
You could try STLFilt, as mentioned in 'C++ Template Metaprogramming' by David Abrahams & Aleksey Gurtovoy.
The book contains a chapter on template message diagnostics. It suggests using STLFilt's /showback:N option to eliminate compiler backtrace material and get simplified output.
I understand that over a thousand built-in rewrite rules in Mathematica populate the global rules table by default. Is there any way to get Mathematica to give a full or even partial list of those rules?
The best way is to get a job at Wolfram Research.
Failing that, I think that for things not completely compiled into the kernel you can recover most of the rules/definitions. Look at
Attributes[fn]
where fn is the command that you're interested in. If it returns
{Protected, ReadProtected}
then there's something you can get a look at (although often it's just a MakeBoxes (formatting) definition or an AutoLoad/Stub-type definition). To see what's there, run
Unprotect[fn];
ClearAttributes[fn, ReadProtected];
??fn
Quite often you'll have to run an example of the command to load it if it was a stub. You'll also have to dig down from the user-facing commands to the back-end implementations.
Eventually you'll most likely reach a core command that is compiled into the kernel and whose details you cannot see.
I previously mentioned this in tips for creating Graph diagrams and it got a mention in What is in your Mathematica tool bag?.
A good example, with a nice bite-sized and digestible bit of code, is Experimental`AngularSlider[], mentioned in Circular/Angular slider. I'll leave it up to you to look at the code it produces.
Another example is something like BoxWhiskerChart, where you need to call it once in order to load all of the code. Then you see that BoxWhiskerChart proceeds to call Charting`iBoxWhiskerChart which you'll have to unprotect to look at, etc...
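To make the BoxWhiskerChart walk-through concrete, the steps look roughly like this (the exact definitions you see will vary between Mathematica versions):

(* Run the command once so the stub/autoload definition is replaced by the real code *)
BoxWhiskerChart[RandomReal[1, 50]];

Attributes[BoxWhiskerChart]
(* {Protected, ReadProtected} *)

(* Strip the attributes that hide the definitions, then inspect them *)
Unprotect[BoxWhiskerChart];
ClearAttributes[BoxWhiskerChart, ReadProtected];
??BoxWhiskerChart

(* The printed definition hands off to Charting`iBoxWhiskerChart, which needs
   the same treatment before it will show its own code *)
Unprotect[Charting`iBoxWhiskerChart];
ClearAttributes[Charting`iBoxWhiskerChart, ReadProtected];
??Charting`iBoxWhiskerChart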
Let me pose a bit of background information before asking my question:
I recently joined a new software development group that uses Rational tools for configuration management, including a source control and change management system.
In addition to these tools, the team has a standard practice of noting any code changes as a comment in the code, such as:
///<history>
/// [mt] 3/15/2009 Made abc changes to fix xyz
///</history>
Their official purpose for the commenting standard is that "the comments provide traceability from requirement to code modification".
I am preparing to argue that this practice is unnecessary and redundant, and that the team should get rid of this standard immediately.
To wit: the change management system is the place to build traceability from requirement to code modification, and source control can provide a detailed history of changes via a diff between versions. When source code is checked in, the corresponding change management ticket is noted. When a CM ticket is resolved, we note which source code files were modified. I believe this provides a sufficient cross-reference for the desired traceability.
I would like to know if anyone disagrees with my argument. Am I missing some benefit of commented source code history that change management and source control systems cannot provide?
For myself, I have always found such comments to be more trouble than they're worth: they can cause merge conflicts, can appear as 'false positives' when you're trying to isolate the diffs between two versions, and may reference code changes that have since been obsoleted by later changes.
It's often (not always, but often) possible to change version-control systems without losing metadata. If you were to move your code to a system that doesn't support this, it would not be hard to write a script to convert the change history into comments before the cutover.
A comment allows you to find all the changes and their reasons right in the code, where they are relevant, without having to dig into diffs and version control system intricacies. Furthermore, should you decide to change version control systems, the comments will stay.
I worked on a large project with a similar practice that had changed source control systems twice. There wasn't a day when I wasn't glad to have these comments.
Is it redundant? Yes.
Is it unnecessary? No.
I've always thought that code should be, of course, under version control, and that the current source code (the one that you can open and read today) should be valid only in present tense.
It doesn't matter if a report could have up to 3 axes in the past and last month you updated it to support up to 6 axes. It doesn't matter if you expanded some function or fixed some bug, as long as the current version can be easily understood. When you fix a bug, just leave the fixed code.
There's an exception, though. If (and only if) the fixed code looks less intuitive to you than the previous, incorrect one; if you feel that someone might come along tomorrow and, just by reading the code, be tempted to change it back to what "seems more correct", then it's good to add a comment: "This is done this way to avoid... blah blah blah." Also, if the underlying problem is an infamous war story inside the team's culture, or if for some reason the bug report database contains very interesting information about this part of the code, I wouldn't find it incorrect to add "(see Bug Id 10005)" to the explanatory comment.
The one that jumps to mind for me is vendor lock-in. If you ever moved away from Rational, you'd need to make sure that the full change history was maintained during the migration, not just the versions of the artifacts.
When you're in the code you need to know why it's structured like that, hence in-code commenting. Tools that sit outside the code, good though they may be, require far too much of a context shift in your brain to be useful. Besides, trying to reverse-engineer the intent of the code from documentation and a diff is pretty damn hard; I'd much rather read a line of comment any day.
There was a phase in the code I work on, back in the 1994-96 time frame, where there was a tendency to insert change history comments at the top of the file. Those comments are now meaningless and useless, and one of the many standard cleanups I perform when editing files containing such comments is to remove them.
In contrast, there are also some comments with a bug number at the location where the change is made, typically explaining why the ridiculous code is as it is. These can be very helpful. The bug number gives you somewhere else to look for information, and fingers the culprit (or victim - it varies).
On the other hand, items like this one - genuine; cleaned up last week - make me grit my teeth.
if (ctab->tarray && ctab->tarray[i])
#ifndef NT
    prt_theargs(*(ctab->tarray[i]));
#else
    /* Correct the parameter type mismatch in the line above */
    prt_theargs(ctab->tarray[i]);
#endif /* NT */
The NT team got the call correct; why they thought it was a platform-specific fix is beyond me. Of course, if the code had used prototypes instead of just parameterless declarations before now, then the Unix team would have had to fix the code too. The comment was a help - assuring me that the bug was genuine - but exasperating.