Is there a straightforward way, with ready-at-hand tooling, to suspend a traced process's execution when certain syscalls are called with specific parameters? Specifically, I want to suspend program execution whenever
stat("/${SOME_PATH}")
or
readlink("/${SOME_PATH}")
are called. I aim to then attach a debugger, so that I can identify which of the hundreds of shared objects that are linked into the process is trying to access that specific path.
strace shows me the syscalls alright, and gdb does the rest. The question is, how to bring them together. This surely can be solved with custom glue-scripting, but I'd rather use a clean solution.
The problem at hand is a 3rd-party tool suite which is available only in binary form and whose distribution package completely violates the LSB/FHS and good manners, placing shared objects all over the filesystem, some of which are loaded from unconfigurable paths. I'd like to identify which modules of the tool suite try to do this and either patch the binaries or file an issue with the vendor.
This is the approach I use for similar conditions in Windows debugging. I think it should work for you too, though I have not tried it with gdb on Linux.
Once you have attached to your process, set a breakpoint on the system call in question, which in your case is stat.
Add a condition based on esp to your breakpoint. For example, suppose you want to catch stat("/$te"): at the breakpoint, the value at [esp+4] is a pointer to the path string, in this case "/$te". Then add a condition like *(uint32_t *)(*(char **)($esp + 4)) == 0x6574242f (the four bytes of "/$te", little-endian). It seems that you can use strcmp() in your condition too, as described here.
I think something similar to this should work for you too.
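If you'd rather stop on the syscall itself than on the libc wrapper, gdb's syscall catchpoints plus a string-compare condition do the same job with no glue scripting. A rough sketch of a session on x86-64 (the pid and path here are placeholders; on that ABI the first syscall argument is in $rdi, and $_streq is a built-in gdb convenience function):

    $ gdb -p 12345
    (gdb) catch syscall stat readlink
    (gdb) condition 1 $_streq((char *)$rdi, "/some/path")
    (gdb) continue

Note that a catchpoint fires on both syscall entry and return, so a matching call may stop twice.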
GOAL: I want to force zpaq into backing up symlinks as though they were real files, possibly by fooling it (using LD_PRELOAD or some sort of FUSE system) into thinking symlinks are actual files.
I want to create/find a library that forces programs to read symlinks as if they were actual files, and then use LD_PRELOAD (or something similar) to run the program in that environment.
In other words, when the program calls readdir() [or whatever], the
symlink appears as an actual file, and when the program calls open()
[or whatever], it opens the actual target file, not the symlink.
Is there any way to do this? The otherwise wonderful zpaq doesn't
support symlinks at the moment, and the files are on different drives,
so I can't use hard linking either.
So what is the problem? You seem to know about LD_PRELOAD already; all you need to do is write a library which exposes the correct functions and put it in LD_PRELOAD. This link explains the process a bit more verbosely if you need it.
The only potential issue is that calls to things in glibc don't always get linked to the symbols you might expect… for example, a call to write may actually call __write (if it doesn't get inlined to something even lower-level). Depending on your optimization level, some function calls will actually be removed completely, such as memset with a fixed length. There are also checked variants of many functions if you're using _FORTIFY_SOURCE. I don't think this should be an issue for readlink, but you're probably just going to have to TIAS (try it and see).
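For concreteness, here is a minimal, untested sketch of what such an interposer could look like for this case. The file name is made up, the strategy (forward lstat() to the real stat(), and have readlink() claim the path is not a symlink) is just one possible policy, and per the caveat above, older glibc may route these calls through __lxstat() and friends instead:

    /* symlink_as_file.c - hypothetical sketch: make symlinks look like
     * regular files by forwarding lstat() to the real stat() and by
     * having readlink() report "not a symbolic link".
     *
     * Build: gcc -shared -fPIC -o symlink_as_file.so symlink_as_file.c -ldl
     * Run:   LD_PRELOAD=./symlink_as_file.so zpaq ...
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <errno.h>
    #include <sys/stat.h>
    #include <unistd.h>

    typedef int (*stat_fn)(const char *, struct stat *);

    int lstat(const char *path, struct stat *buf)
    {
        /* Follow the link instead of describing it: call the real stat(). */
        stat_fn real_stat = (stat_fn)dlsym(RTLD_NEXT, "stat");
        return real_stat(path, buf);
    }

    ssize_t readlink(const char *path, char *buf, size_t bufsiz)
    {
        /* Pretend nothing is ever a symlink. */
        (void)path; (void)buf; (void)bufsiz;
        errno = EINVAL;   /* POSIX: "the named file is not a symbolic link" */
        return -1;
    }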
Basically, just do it. If it doesn't work, then come back to SO if you need help debugging.
On OS X, the man page for fork says this:
There are limits to what you can do in the child process. To be totally safe you should restrict yourself to only executing async-signal safe operations until such time as one of the exec functions is called. All APIs, including global data symbols, in any framework or library should be assumed to be unsafe after a fork() unless explicitly documented to be safe or async-signal safe. If you need to use these frameworks in the child process, you must exec. In this situation it is reasonable to exec yourself.
Based on the footer of the man page, this has probably been there a long time:
4th Berkeley Distribution June 4, 1993 4th Berkeley Distribution
I'd have thought that chdir(2) would be safe to call between fork() and exec(), but its man page does not say that it's async-signal safe. Is it, in fact, unsafe? If so, am I really expected to change directory before fork()? That seems unreasonable to me.
The same goes for setenv(3). Considering that it calls malloc(), I guess it's probably not safe. But there is no equivalent of execvp() that allows me to pass an environment: execvp() searches the PATH, while the closest equivalent I could find that takes an environment argument, execve(), does not.
The first block of text you're quoting ("There are limits...") was added specifically for Mac OS X -- it's trying to point out that most of Apple's libraries and frameworks (AppKit, Carbon, Foundation, etc) will not work properly after a fork().
As far as I'm aware, all standard C library functions, including chdir(), ARE safe to use after fork().
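For what it's worth, the pattern being asked about looks like this; chdir() is on POSIX's async-signal-safe list, and execvp() does the PATH search (a minimal sketch, with /tmp and ls standing in for the real directory and program):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid == -1) {
            perror("fork");
            return EXIT_FAILURE;
        }
        if (pid == 0) {                    /* child */
            if (chdir("/tmp") == -1)       /* async-signal-safe per POSIX */
                _exit(127);
            char *args[] = { "ls", "-l", NULL };
            execvp("ls", args);            /* searches PATH */
            _exit(127);                    /* reached only if exec fails */
        }
        int status;
        waitpid(pid, &status, 0);          /* parent reaps the child */
        return EXIT_SUCCESS;
    }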
The Unix philosophy teaches that we should develop small programs that do one thing well. It also teaches that we should separate policy from mechanism. I guess one way to take this is to design a text-based shell command first and build a GUI on top of it later (if desired).
I truly like the idea that small programs can be composed (piped together) into more complex systems. I also like the fact that simple, focused designs should theoretically need less maintenance than a monolithic system that binds all its rules together.
How sound would it be to program something (in Ruby or Python for example) that relegates some of its functionality to shell commands called straight from the code? Taking this a step further, does it make sense to deliberately design a shell command that is intended to be called directly from code (compiled or scripted)? Obviously, this would only make sense if the shell command had some worthy console use.
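To make "called straight from the code" concrete, I mean something along these lines (a C sketch using popen(); ls -l is just a placeholder for a more worthwhile command):

    #include <stdio.h>

    int main(void)
    {
        /* Run a shell command and consume its output line by line. */
        FILE *p = popen("ls -l /tmp", "r");
        if (p == NULL)
            return 1;
        char line[256];
        while (fgets(line, sizeof line, p) != NULL)
            fputs(line, stdout);
        return pclose(p);
    }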
I can't say from my experience that this is a practice I've seen much of. More often than not, task-specific code relies on task-specific libraries. Of course, it's possible that, unbeknownst to me, I have made use of libraries which are actually just wrappers around shell commands. (Or rather, that the shell command is a wrapper around some library.)
The Unix paradigm is modularity. You should write your program as a bunch of modules, which can then be extracted into multiple programs if you want to. However, executing a new program whenever you'd like to make a function call is slow and impractical.
I am aware that this is nothing new and has been done several times. But I am looking for some reference implementation (or even just reference design) as a "best practices guide". We have a real-time embedded environment and the idea is to be able to use a "debug shell" in order to invoke some commands. Example: "SomeDevice print reg xyz" will request the SomeDevice sub-system to print the value of the register named xyz.
I have a small set of routines that is essentially made up of 3 functions and a lookup table:
a function that gathers a command line - it's simple; there's no command line history or anything, just the ability to backspace or press escape to discard the whole thing. But if I thought fancier editing capabilities were needed, it wouldn't be too hard to add them here.
a function that parses a line of text argc/argv style (see Parse string into argv/argc for some ideas on this)
a function that takes the first arg on the parsed command line and looks it up in a table of commands & function pointers to determine which function to call for the command, so the command handlers just need to match the prototype:
int command_handler(int argc, char *argv[]);
Then that function is called with the appropriate argc/argv parameters.
Actually, the lookup table also has pointers to basic help text for each command, and if a command is followed by '-?' or '/?', that bit of help text is displayed. Also, if 'help' is used as a command, the command table is dumped (possibly only a subset, if a parameter is passed to the 'help' command).
Sorry, I can't post the actual source, but it's pretty simple and straightforward to implement, and functional enough for pretty much all the command-line handling needs I've had in embedded systems development.
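Since the actual source can't be shared, here is a hypothetical reconstruction of the scheme described above; every name in it is invented for illustration:

    #include <stdio.h>
    #include <string.h>

    typedef int (*cmd_handler_t)(int argc, char *argv[]);

    struct command {
        const char    *name;
        cmd_handler_t  handler;
        const char    *help;
    };

    static int cmd_reboot(int argc, char *argv[])
    {
        (void)argc; (void)argv;
        printf("rebooting...\n");
        return 0;
    }

    /* The lookup table: command name, handler, one line of help text. */
    static const struct command commands[] = {
        { "reboot", cmd_reboot, "reboot - restart the device" },
        /* ... one entry per command ... */
    };

    /* Look argv[0] up in the table and dispatch; -1 if unknown. */
    static int dispatch(int argc, char *argv[])
    {
        for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++) {
            if (strcmp(argv[0], commands[i].name) == 0)
                return commands[i].handler(argc, argv);
        }
        printf("unknown command: %s\n", argv[0]);
        return -1;
    }

    int main(void)
    {
        char *demo[] = { "reboot" };   /* stands in for a parsed line */
        return dispatch(1, demo) ? 1 : 0;
    }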
You might bristle at this response, but many years ago we did something like this for a large-scale embedded telecom system using lex/yacc (nowadays I guess it would be flex/bison, this was literally 20 years ago).
Define your grammar, define ranges for parameters, etc... and then let lex/yacc generate the code.
There is a bit of a learning curve, as opposed to rolling a one-off custom implementation, but then you can extend the grammar, add new commands & parameters, change ranges, etc. extremely quickly.
You could check out libcli. It emulates Cisco's CLI and apparently also includes a telnet server. That might be more than you are looking for, but it might still be useful as a reference.
If your needs are quite basic, a debug menu which accepts simple keystrokes, rather than a command shell, is one way of doing this.
For registers and RAM, you could have a sub-menu which just does a memory dump on demand.
Likewise, to enable or disable individual features, you can control them via keystrokes from the main menu or sub-menus.
One way of implementing this is via a simple state machine. Each screen has a corresponding state which waits for a keystroke, and then changes state and/or updates the screen as required.
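A minimal sketch of that state machine, with states and keystrokes invented for illustration (debug_getchar() stands in for whatever byte-at-a-time input the target has, e.g. a UART):

    #include <stdio.h>

    enum menu_state { MENU_MAIN, MENU_MEMORY, MENU_FEATURES };

    /* Stand-in input source; on a real target this would read the UART. */
    static int debug_getchar(void)
    {
        return getchar();
    }

    /* One keystroke in, possibly a new state out. */
    static enum menu_state handle_key(enum menu_state state, int key)
    {
        switch (state) {
        case MENU_MAIN:
            if (key == 'm') { printf("memory menu\n");  return MENU_MEMORY; }
            if (key == 'f') { printf("feature menu\n"); return MENU_FEATURES; }
            break;
        case MENU_MEMORY:
            if (key == 'd') printf("(memory dump would go here)\n");
            if (key == 'q') return MENU_MAIN;
            break;
        case MENU_FEATURES:
            if (key == 't') printf("(feature toggle would go here)\n");
            if (key == 'q') return MENU_MAIN;
            break;
        }
        return state;
    }

    int main(void)
    {
        enum menu_state state = MENU_MAIN;
        int key;
        while ((key = debug_getchar()) != EOF)
            state = handle_key(state, key);
        return 0;
    }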
vxWorks includes a command shell that embeds the symbol table and implements a C expression evaluator, so that you can call functions, evaluate expressions, and access global symbols at runtime. The expression evaluator supports integer and string constants.
When I worked on a project that migrated from vxWorks to embOS, I implemented the same functionality. Embedding the symbol table required a bit of gymnastics, since it does not exist until after linking; I used a post-build step to parse the output of the GNU nm tool to create a symbol table as a separate load module.
In an earlier version I did not embed the symbol table at all, but rather created a host-shell program that ran on the development host, where the symbol table resided, and communicated with a debug stub on the target that could perform function calls to arbitrary addresses and read/write arbitrary memory. This approach is better suited to memory-constrained devices, but you have to be careful that the symbol table you are using and the code on the target are from the same build. Again, that was an idea I borrowed from vxWorks, which supports both the target-based and host-based shell with the same functionality; for the host shell, vxWorks checksums the code to ensure the symbol table matches, whereas in my case it was a manual (and error-prone) process, which is why I implemented the embedded symbol table.
Although initially I only implemented memory read/write and function-call capability, I later added an expression evaluator based on the algorithm (but not the code) described here. After that I added simple scripting capabilities in the form of if-else, while, and procedure-call constructs (using a very simple non-C syntax). So if you wanted new functionality or a new test, you could either write a new function or create a script (if performance was not an issue); the existing functions acted rather like 'built-ins' of the scripting language.
To perform the arbitrary function calls, I used a function-pointer typedef that took an arbitrarily large number (24) of arguments. Using the symbol table, you find the function address, cast it to the function-pointer type, and pass it the real arguments plus enough dummy arguments to make up the expected number, thus creating a suitable (if wasteful) stack frame.
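A hypothetical sketch of that trick, shown with 8 argument slots instead of 24 to keep it short; it leans on typical C calling conventions rather than anything the standard guarantees, which is exactly the trade-off being accepted:

    #include <stdint.h>
    #include <stdio.h>

    /* Every shell-callable function is invoked through this wide type;
     * unused slots are passed as harmless zero padding. */
    typedef intptr_t (*shell_fn_t)(intptr_t, intptr_t, intptr_t, intptr_t,
                                   intptr_t, intptr_t, intptr_t, intptr_t);

    /* Call the function at 'addr' (found via the symbol table) with
     * 'argc' real arguments taken from 'argv'. */
    static intptr_t shell_call(void *addr, int argc, const intptr_t argv[])
    {
        intptr_t a[8] = { 0 };
        for (int i = 0; i < argc && i < 8; i++)
            a[i] = argv[i];
        shell_fn_t fn = (shell_fn_t)addr;
        return fn(a[0], a[1], a[2], a[3], a[4], a[5], a[6], a[7]);
    }

    /* Demo: a two-argument function simply ignores the zero padding. */
    static intptr_t add2(intptr_t x, intptr_t y) { return x + y; }

    int main(void)
    {
        const intptr_t args[] = { 2, 3 };
        printf("%ld\n", (long)shell_call((void *)add2, 2, args));
        return 0;
    }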
On other systems I have implemented a Forth threaded interpreter, which is a very simple language to implement, though its syntax is perhaps less than user-friendly. You could equally embed an existing solution such as Lua or Ch.
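Embedding Lua, for instance, is only a few lines; a minimal sketch, assuming the Lua headers and library are available for your toolchain (build with something like gcc shell.c -llua):

    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>

    int main(void)
    {
        lua_State *L = luaL_newstate();   /* fresh interpreter state */
        luaL_openlibs(L);                 /* load the standard Lua libraries */
        /* In a real debug shell this string would come from the console. */
        luaL_dostring(L, "print('hello from the debug shell')");
        lua_close(L);
        return 0;
    }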
For a small, lightweight thing you could use Forth. It's easy to get going (Forth kernels are small).
Look at figForth, LINa and GnuForth.
Disclaimer: I don't do Forth, but OpenBoot and the PCI bus do, and I've used them and they work really well.
Alternative UIs
Deploy a web server on your embedded device instead. Even a serial link will work with SLIP, and the UI can be reasonably complex (or even serve up a JAR and get really, really complex).
If you really need a CLI, then you can point at a link and get a telnet session.
One alternative is to use a very simple binary protocol to transfer the data you need, and then make a user interface on the PC, using e.g. Python or whatever is your favourite development tool.
The advantage is that it minimises the code in the embedded device, and shifts as much of it as possible to the PC side. That's good because:
It uses up less embedded code space—much of the code is on the PC instead.
In many cases it's easier to develop a given functionality on the PC, with the PC's greater tools and resources.
It gives you more interface options. You can use just a command line interface if you want. Or, you could go for a GUI, with graphs, data logging, whatever fancy stuff you might want.
It gives you flexibility. Embedded code is harder to upgrade than PC code. You can change and improve your PC-based tool whenever you want, without having to make any changes to the embedded device.
If you want to look at variables: if your PC tool is able to read the ELF file generated by the linker, it can find a variable's location from the symbol table. Even better, it can read the DWARF debug data and know the variable's type as well. Then all you need is a "read-memory" protocol message on the embedded device to get the data, and the PC does the decoding and displaying.
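As an illustration, the target side of such a "read-memory" message can be tiny; a sketch with an invented wire format (one command byte, a 32-bit little-endian address as on a typical small target, and a 16-bit length; __attribute__((packed)) is the GCC spelling):

    #include <stdint.h>

    #define MSG_READ_MEM 0x01

    /* Request: | cmd (1 byte) | addr (4 bytes LE) | len (2 bytes LE) | */
    struct read_mem_req {
        uint8_t  cmd;
        uint32_t addr;
        uint16_t len;
    } __attribute__((packed));

    /* Handle one request by replying with 'len' raw bytes from 'addr';
     * 'send' is whatever transport the device uses (UART, TCP, ...). */
    void handle_read_mem(const struct read_mem_req *req,
                         void (*send)(const void *, uint16_t))
    {
        if (req->cmd == MSG_READ_MEM)
            send((const void *)(uintptr_t)req->addr, req->len);
    }

The PC side then matches the address and type it pulled from the ELF/DWARF data and renders the value however you like.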