I would like the command gdb, when invoked on a program X that is already being debugged, to switch to the existing debugging session instead of signalling the error "This program is already being debugged" in gud-common-init.
I believe this is important because it makes the behaviour of gdb harmonize with the standard behaviour of most other Emacs commands, such as find-file and switch-to-buffer, and thus causes less confusion for the user.
So far I have modified the line containing
(error "This program is already being debugged"))
to read
(message "This program is already being debugged")
to at least prevent the error from arising. However, the function gdb still performs some extra initialization that should not be needed and that causes unnecessary delays. Is this a TODO item, or have I missed some gud/gdb function that already does this?
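For what it's worth, the behaviour I have in mind can be approximated from the outside with a small wrapper; the following is only a sketch (my-gdb-or-switch is my own name) and assumes gud's default *gud-FILE* buffer naming and that the program is the last word of the command line:

(defun my-gdb-or-switch (command-line)
  "Run `gdb' on COMMAND-LINE, or switch to an existing GUD buffer for it."
  (interactive (list (gud-query-cmdline 'gdb)))
  ;; Assumes the program name is the last word of COMMAND-LINE and that
  ;; gud names its buffer *gud-PROGRAM*; adjust if your setup differs.
  (let* ((program (file-name-nondirectory
                   (car (last (split-string command-line)))))
         (existing (get-buffer (concat "*gud-" program "*"))))
    (if existing
        (pop-to-buffer existing)
      (gdb command-line))))

Still, I would prefer the gdb command itself to behave this way.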
Many thanks in advance,
Per Nordlöw
You can always use rename-buffer. This is how I run multiple gdb sessions on the same executable. It is not automatic, but it is an effective workaround.
For example, if my executable is called pump, then upon running gdb a buffer named *gud-pump* will be created to represent the gdb session. From this buffer, do M-x rename-buffer and give it the name *gud-pump1*.
Then invoke gdb again and you will have two GUD sessions, *gud-pump* and *gud-pump1*. The sessions are separate and should not interfere with each other (although they can interact).
This is my first question on Stack Overflow; up until now I have always managed to find my answers.
So... I'm writing a debugger (for Windows, in Python, using the WinAppDbg library) that should trace a program's execution, and I have run into some problems.
I'm setting the trap flag so that I can stop at every single step.
First problem: sometimes the execution flow goes through a Windows API, which goes into the kernel. When it returns, the trap flag is of course cleared, and the thread may execute millions of instructions without my debugger tracing a single one of them.
Possible solution: before a Windows API is called, I set the permissions of the next addresses (where the call will return) to guard page, so when the call returns I get a guard-page exception, set the trap flag again, and continue tracing. But this causes a different problem (I call it the "second problem").
Second problem: since I'm setting the trap flag only in my main thread, all I see is that thread looping (I guess it's the Windows GUI message loop), and the program's execution isn't advancing (for example, new threads should be created, but I don't see them).
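For reference, a stripped-down version of my stepping logic looks roughly like this (a sketch only: the handler names follow WinAppDbg's event-dispatch convention, and the code that arms the guard pages before API calls is omitted):

import sys
from winappdbg import Debug, EventHandler

class StepTracer(EventHandler):

    def create_process(self, event):
        # Start single-stepping the initial thread as soon as the process exists.
        event.get_thread().set_tf()

    def single_step(self, event):
        # EXCEPTION_SINGLE_STEP: log the instruction pointer, then re-arm the
        # trap flag, which the CPU clears on every trap.
        thread = event.get_thread()
        print(hex(thread.get_pc()))
        thread.set_tf()

    def guard_page(self, event):
        # The guard page armed before a Windows API call was hit, so the call
        # has returned: turn the trap flag back on and resume tracing.
        event.get_thread().set_tf()

if __name__ == "__main__":
    debug = Debug(StepTracer(), bKillOnExit=True)
    try:
        debug.execv(sys.argv[1:])
        debug.loop()
    finally:
        debug.stop()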
So what am I looking for?
A debugger's source code that can handle the problems I've described.
Or better yet, a solution to my problems. What am I doing wrong?
Thank you all!
I am performing very rapid file access in Ruby (2.0.0 p39474) and keep getting the exception Too many open files.
Having looked at this thread, here, and various other sources, I'm well aware of the OS limits (set to 1024 on my system).
The part of my code that performs this file access is mutexed, and takes the form:
File.open( filename, 'w'){|f| Marshal.dump(value, f) }
where filename is subject to rapid change, depending on the thread calling the section. It's my understanding that this form relinquishes its file handle after the block.
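As far as I understand, the block form is just shorthand for the explicit open/close below, so the handle should be released even if Marshal.dump raises:

f = File.open(filename, 'w')
begin
  Marshal.dump(value, f)
ensure
  f.close   # File.open with a block does this automatically
end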
I can verify the number of File objects that are open using ObjectSpace.each_object(File). This reports that there are up to 100 resident in memory, but only one is ever open, as expected.
Furthermore, the exception itself is thrown at a time when only 10-40 File objects are reported by ObjectSpace. Manually garbage collecting fails to improve any of these counts, as does slowing down my script by inserting sleep calls.
My question is, therefore:
Am I fundamentally misunderstanding the nature of the OS limit? Does it cover the whole lifetime of a process?
If so, how do web servers avoid crashing out after accessing over ulimit -n files?
Is Ruby retaining its file handles outside of its object system, or is the kernel simply very slow at counting 'concurrent' access?
Edit 20130417:
strace indicates that Ruby doesn't write all of its data to the file before returning and releasing the mutex. As a result, the file handles stack up until the OS limit is hit.
In an attempt to fix this, I have used syswrite/sysread, synchronous mode, and called flush before close. None of these methods worked.
My question is thus revised to:
Why is ruby failing to close its file handles, and how can I force it to do so?
Use dtrace or strace or whatever equivalent is on your system, and find out exactly what files are being opened.
Note that these could be sockets.
I agree that the code you have pasted does not seem capable of causing this problem, at least not without a rather strange concurrency bug as well.
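If strace is awkward to wire up, you can also ask the kernel from inside the process which descriptors are really open. A Linux-only sketch (on other platforms, lsof -p <pid> gives the same information):

def dump_open_fds
  fds = Dir.entries("/proc/self/fd") - %w[. ..]
  warn "#{fds.size} descriptors currently open"
  fds.each do |fd|
    begin
      warn "  #{fd} -> #{File.readlink("/proc/self/fd/#{fd}")}"
    rescue Errno::ENOENT
      # the descriptor used to read /proc/self/fd itself may already be gone
    end
  end
end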
I'm pretty new to Linux, and I have a few questions about kernel module programming. I'm using Ubuntu and C to build my .ko files. I'm trying to make a module that will execute program /b instead of /a whenever program /a is called. Any tips?
Also, even with printk(KERN_EMERG ...), it won't print to the terminal. Is there a setting I'm missing, or does Ubuntu not do that?
Thanks a lot!
You may need to fiddle with the settings in /proc/sys/kernel/printk, which controls which levels are printed to the console. From proc(5):
/proc/sys/kernel/printk
    The four values in this file are console_loglevel, default_message_loglevel, minimum_console_loglevel, and default_console_loglevel. These values influence printk() behavior when printing or logging error messages. See syslog(2) for more info on the different loglevels. Messages with a higher priority than console_loglevel will be printed to the console. Messages without an explicit priority will be printed with priority default_message_loglevel. minimum_console_loglevel is the minimum (highest) value to which console_loglevel can be set. default_console_loglevel is the default value for console_loglevel.
Note that, like nice(2) values, the lower values have higher priorities.
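For example, to let every message through to the console, including KERN_DEBUG, you can raise console_loglevel to 8 (both commands need root):

echo 8 > /proc/sys/kernel/printk
# or, equivalently
dmesg -n 8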
The easiest way to make an execve() for path /foo/a to execute /foo/b is to bind-mount /foo/b on top of /foo/a:
mount -obind /foo/b /foo/a
No kernel module is required.
Doing this same task with a kernel module would be significantly more work; the LSM interface may provide some assistance in figuring out exactly when your target is being executed. If you're looking for a starting point, do_execve() in fs/exec.c is where to start reading. Be sure to have ctags installed and run, and to know how to use your editor's ctags integration; it makes reading the code significantly easier.
Answer about printk:
It prints to the console, not the terminal. But what's the console?
You can find its TTY in /proc/cmdline. Normally it's tty0, which means the screen connected to the computer.
If you connect via SSH or Telnet, you won't see these messages there.
If you're working in a graphical environment (GNOME/KDE), you may need something like Ctrl-Alt-F1/F2 to switch to a text-mode TTY.
You can also use the dmesg command to see the messages.
This is homework. Tips only, no exact answers please.
I have a compiled program (no source code) that takes in command line arguments. There is a correct sequence of a given number of command line arguments that will make the program print out "Success." Given the wrong arguments it will print out "Failure."
One thing that is confusing me is that the instructions mention two system tools (it doesn't name them) which will help in figuring out the correct arguments. The only tool I'm familiar with (unless I'm overlooking something) is GDB, so I believe I am missing a critical component of this challenge.
The challenge is to figure out the correct arguments. So far I've run the program in GDB and set a breakpoint at main but I really don't know where to go from there. Any pro tips?
Are you sure you have to debug it? It would be easier to disassemble it. When you disassemble it, look for cmp instructions.
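For example, a rough starting point (./program stands for your binary; the exact output depends on your toolchain):

objdump -d -M intel ./program | less      # full disassembly
objdump -d ./program | grep -n -A2 'cmp'  # jump straight to the comparisons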
There exist not only tools to disassemble x86 binaries into assembler listings, but also some which attempt to decompile them into a more high-level, readable form. Try googling and see what you find. I'd be more specific, but that would be counterproductive if your job is to learn some reverse-engineering skills.
It is possible that the code is something like this: if Arg(1) = 'FOO' then print "Success". So you might not need to disassemble at all. Instead, you might only need a tool that dumps out all the sequences of printable ASCII characters in the executable (assuming the sequence you are supposed to input is made of characters you can easily type at the keyboard); many utilities will do this. If the program has been very carefully constructed, the author won't have left "FOO", if that was the "password", in plain sight, but will have tried to obscure it somewhat.
Personally, I would start with an ltrace of the program with an arbitrary set of arguments. I'd then use the strings command and guess from its output what some of the hidden argument literals might be. (Let's assume, for the moment, that the professor hasn't encrypted or obfuscated the strings and that they appear in the binary as literals.) Then try again with one or two of them (or the requisite number, if it is known).
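Something along these lines, where ./program and the guesses are placeholders:

strings ./program | less            # look for likely argument literals
ltrace ./program guess1 guess2      # watch the library calls it makes
ltrace -s 128 ./program guess1      # show longer strings in call arguments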
If you're lucky, the program was compiled and given to you without running strip, in which case you will have the symbol table to help. You could then try single-stepping through the program (read the gdb manuals). It can be tedious, but there are ways to set a breakpoint and tell the debugger to run through a function call (such as any call into the standard libraries) and stop upon return. Doing this repeatedly (identify where it calls into standard or external libraries, set a breakpoint on the instruction after the call, and let gdb run the process through the call) lets you inspect what the code is doing in between those calls.
Coupled with the ltrace, it should then be fairly easy to see the sequencing of the strcmp() (or similar) calls. As you see the string against which your input is being compared, you can break out of the whole process and re-invoke gdb and the program with that one argument, trace through until the next comparison, and so on. Or you might learn some more advanced gdb tricks and actually modify your argument vector and restart main() from scratch.
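A sketch of that workflow in gdb, assuming an x86-64 binary using the usual calling convention (first two arguments in rdi/rsi) and a plain strcmp():

$ gdb ./program
(gdb) break strcmp
(gdb) run guess1 guess2
(gdb) x/s $rdi        # one side of the comparison
(gdb) x/s $rsi        # the other side -- often the literal you are after
(gdb) continue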
It actually sounds like fun, and I might have my wife whip up a simple binary for me to try this on. I might also create a little program to generate binaries of this sort. I'm thinking of a little #include in the sources which provides the "passphrase" of arguments, and a makefile that selects three to five words from /usr/dict/words, generates that #include file from a template, and then compiles the binary using that sequence.
My question is related to "Turn off buffering in pipe" albeit concerning Windows rather than Unix.
I'm writing a Make clone, and to stop parallel processes from thrashing each other's console output I've redirected the output to pipes (as described here), on which I can do any filtering I want. Unfortunately, long-running processes now buffer up their output rather than sending it in real time as they would on a console.
From peeking at the MSVCRT sources it seems the root cause is that GetFileType() is used to check whether the standard I/O handles are attached to a console, which then sets an internal flag and ends up disabling buffering.
Apparently a separate array of inheritable file handles and flags can also be passed to the child through the undocumented lpReserved2 member of the STARTUPINFO structure when creating the process. About the only working solution I've figured out is to use this list and simply lie about the device type when setting the flags for stdout/stderr.
Now then... Is there any sane way of solving this problem?
There is not. Yes, GetFileType() tells the CRT that stdout is no longer a character device, _isatty() returns false, and so the CRT switches the output stream to buffered mode. That is important for reasonable throughput: flushing output one character at a time is only acceptable when a human is looking at it.
You would have to relink the programs you are trying to redirect against a customized version of the CRT. I don't doubt that if that were possible, you wouldn't be messing with this in the first place. Patching GetFileType() is another un-sane solution.
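The only other escape hatch I can think of applies when you can rebuild the child programs yourself (which, as noted above, you probably can't): have each child drop stdio buffering when its output is not a console. A sketch using only documented MSVCRT calls:

/* In the child's startup: if stdout/stderr are redirected (not a console),
 * switch them to unbuffered mode so a parent reading the pipe sees output
 * in real time. */
#include <stdio.h>
#include <io.h>

static void unbuffer_if_redirected(void)
{
    if (!_isatty(_fileno(stdout)))
        setvbuf(stdout, NULL, _IONBF, 0);
    if (!_isatty(_fileno(stderr)))
        setvbuf(stderr, NULL, _IONBF, 0);
}

int main(void)
{
    unbuffer_if_redirected();
    /* ... the rest of the program ... */
    return 0;
}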