Using Insight or DDD with a working gdb session

I have a script that pre-configures the host, runs sh4-linux-gdb, connects to the target and pre-configures it. After the script runs I get the regular (gdb) prompt.
Is there any way to "bind" this session to Insight or DDD and control this gdb instance via a GUI?
I have tried different solutions, but without success. I would rather not dig into the script itself; it is complex, nested, and used for different platforms and different projects.
Thank you

For Insight the answer is definitely no. Insight is linked into gdb, it does not run a separate gdb. So once you have started gdb, it is too late.
For DDD the answer is, "maybe, but only with extreme difficulty". I don't believe there is any pre-canned way to do it. You could maybe accomplish it if you were desperate by playing games with changing the current gdb's controlling terminal to something set up by DDD.
I would say you're probably better off just changing this script to invoke DDD (or a newer GUI; DDD is quite old and takes the wrong approach to gdb interaction) directly. You probably don't need to understand the whole script to accomplish this.
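If the script's final step is simply to exec sh4-linux-gdb, one low-effort change is to have that step launch DDD and tell it which gdb to run underneath. Here is a minimal sketch (in Python, though a one-line shell change would do the same); the command file name is an assumption about how the existing connect/pre-configure commands could be captured, and the pass-through of gdb options is an assumption about DDD's option handling:

    # Hedged sketch: replace the script's final "run sh4-linux-gdb" step with a
    # DDD launch that drives the same cross-gdb.
    import subprocess

    subprocess.run([
        "ddd",
        "--debugger", "sh4-linux-gdb",  # DDD's option for choosing the underlying gdb
        "-x", "target-setup.gdb",       # hypothetical gdb command file with the
                                        # connect/pre-configure steps; assumes DDD
                                        # passes unrecognized options on to gdb
    ])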

Related

Working effectively with gdb

Whenever possible, I usually tend to learn keyboard shortcuts. It's really amazing to see an experienced coder work with vi effectively.
I've been trying for some time to switch to debugging with gdb instead of the Eclipse debugger (which is based on gdb).
And I still find it hard to actually navigate through the code, inspect variables, and so on.
Actually, I have never seen an experienced gdb user, so I'm wondering... is it worth it? Is it possible to work effectively with gdb?
Note: I also tried cgdb, which is a curses extension of gdb. It's better, but I still feel that it's not effective enough...
GDB has a curses interface, which can be activated via the command line option -tui.
This interface has a single-key mode, which makes the most common operations available with a single keystroke. If you additionally make use of automatic command execution, e.g. to display variable values when a breakpoint is reached, then this is about as comfortable and quick as it gets. But if you use Eclipse anyway, I see no point in avoiding the Eclipse UI for gdb.
I used GDB inside emacs for some time, but found the time to transfer information between GDB and emacs unacceptable, so I switched to the TUI mode mentioned above. I don't know if the transfer of information between GDB and Eclipse is faster, but at least the startup time of complex programs can be much better in GDB directly than in Eclipse.
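As one concrete illustration of the automatic command execution mentioned above, gdb (when built with Python support) can load a small script that prints chosen variables every time the program stops. A minimal sketch; the watched variable names are placeholders:

    # auto_display.py -- load inside gdb with:  source auto_display.py
    # Prints a few variables at every stop (breakpoint hit, step, etc.).
    import gdb

    WATCHED = ["counter", "state"]  # hypothetical variable names

    def on_stop(event):
        for name in WATCHED:
            try:
                gdb.write("%s = %s\n" % (name, gdb.parse_and_eval(name)))
            except gdb.error:
                pass  # variable not in scope at this stop

    gdb.events.stop.connect(on_stop)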
You could also try the DDD debugger:
http://www.gnu.org/software/ddd/
This question didn't get much attention, although a bounty was offered.
So I decided to investigate the issue further myself.
Finally I stumbled upon a solution which I think can be quite effective.
It's called tmux, and it's basically similar to GNU Screen.
This tool allows splitting the console into several panes, each containing a different process.
Therefore it's possible to have a single window with gdb and emacs side by side.
Switching between the panes is very easy using a dedicated hotkey.
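For completeness, that pane layout can itself be scripted. A minimal sketch, assuming tmux with its default pane numbering; the session name and the commands started in each pane are arbitrary:

    # Hedged sketch: build the layout described above (editor in one pane,
    # gdb in the other) and attach to it.
    import subprocess

    def tmux(*args):
        subprocess.run(["tmux", *args], check=True)

    tmux("new-session", "-d", "-s", "debug")               # start a detached session
    tmux("split-window", "-h", "-t", "debug")              # two side-by-side panes
    tmux("send-keys", "-t", "debug:0.0", "emacs", "Enter")
    tmux("send-keys", "-t", "debug:0.1", "gdb ./a.out", "Enter")
    tmux("attach-session", "-t", "debug")                  # hand the terminal over to tmux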

Any recommendations for writing a GUI client from a CLI client?

I'm looking at writing a GUI client for an existing application at my job. The application is CLI-only, and because of that it is not widely used.
This is the first time I'm writing something like this, so I'm asking for recommendations: books, techniques, methodologies, advice. My first approach is to create the interface and make calls to the original CLI client; is this a reasonable approach?
Though it's not ideal, I don't think it's a bad approach, creating a GUI shell for your CLI app. In this design, the GUI acts as the CLI program's user. You have to consider things like:
Can the GUI anticipate or understand all possible CLI program output? How about errors? How complex will that be? Consider parsing Unix "ls" output. Simple enough. How about Windows command prompt "dir" output? A bit more funky.
The CLI program may take time to execute, and this must be presented in the GUI. The GUI may also have to prevent the user from running another instance of the CLI (see the sketch below).
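To make the shape of this concrete, here is a minimal sketch of the wrapper pattern in Python/Tkinter; "mytool" and its flag are hypothetical stand-ins for the real CLI client, and a production version would run the subprocess off the UI thread so the window stays responsive:

    # Hedged sketch: a GUI front end that drives an existing CLI program.
    import subprocess
    import tkinter as tk
    from tkinter import scrolledtext

    def run_cli():
        run_button.config(state=tk.DISABLED)   # prevent a second concurrent run
        output.delete("1.0", tk.END)
        try:
            # Capture stdout and stderr so the GUI can present errors too.
            result = subprocess.run(
                ["mytool", "--query", entry.get()],   # hypothetical CLI invocation
                capture_output=True, text=True, timeout=60,
            )
            output.insert(tk.END, result.stdout or result.stderr)
        except subprocess.TimeoutExpired:
            output.insert(tk.END, "The CLI program timed out.\n")
        finally:
            run_button.config(state=tk.NORMAL)

    root = tk.Tk()
    root.title("GUI shell for a CLI client")
    entry = tk.Entry(root, width=40)
    entry.pack(padx=8, pady=4)
    run_button = tk.Button(root, text="Run", command=run_cli)
    run_button.pack(pady=4)
    output = scrolledtext.ScrolledText(root, width=60, height=15)
    output.pack(padx=8, pady=4)
    root.mainloop()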
You might want to consider tcl/tk. I've written several successful commercial GUIs that work in exactly this manner.
I'll admit that it maybe takes a little skill to craft a stunning GUI but it's not impossible, and not even that hard. You won't be able to reproduce the eye candy of flash or silverlight, and if that is important this might not be the right solution for you.
If, on the other hand, you are more concerned with Getting The Job Done, tcl is a very viable choice. It's easy to learn and easy to integrate with command-line tools.

How can I determine if my process is being run interactively?

Is there a standard(ish) POSIX way of determining if my process (I’m writing this as a Ruby script right now; but I’m curious for multiple environments, including Node.js and ISO C command-line applications) is being run in an interactive terminal, as opposed to, say, cron, or execution from another tool, or… so on and so forth.
Specifically, I need to acquire user input in certain situations, and I need to fail fatally if that is determinably not possible (i.e. being run by cron.) I can do this with an environment variable, but I’d prefer something more standard-ish, if I can.
I've always used $stdout.isatty to check for this. Other approaches might include checking the value of ENV['TERM'] or utilizing the ruby-terminfo gem.
The closest you'll get, AFAIK, is isatty(). But as the page itself says, nothing guarantees that a human is controlling the terminal.
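The same check translated to Python (the question mentions Ruby, Node.js and C, but they all sit on top of the same POSIX isatty() call); a minimal sketch:

    # Hedged sketch: fail fast when not attached to an interactive terminal
    # (e.g. under cron or when output is piped), mirroring the isatty() check.
    import sys

    def require_interactive():
        if not (sys.stdin.isatty() and sys.stdout.isatty()):
            sys.exit("fatal: this command needs an interactive terminal")

    if __name__ == "__main__":
        require_interactive()
        answer = input("Proceed? [y/N] ")
        print("You said:", answer)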
As Mitch wrote in a comment, which may also be useful to some people:
For anyone who came here looking for Windows, try System.Environment.UserInteractive for .NET or GetUserObjectInformation for Win32, which will fail for non-interactive processes.

Is it possible to recover keyboard input that was done while Mac OS was starting up?

I wonder if it is possible to figure out which keys a user was pressing while Mac OS was starting up.
Any way will do. As far as I understand it, there is no easy way to simply hook an app/script to start working and capture keystrokes at the same time as the OS boots. But maybe there is a way to reverse engineer this somehow? Maybe by looking into a specific log file or something like that?
Any result will do. Basically what I'm interested in is finding out which key the user pressed or held during the OS startup. It may be a string, a character code or a hex value; it doesn't really matter.
UPDATE: Guided by Pekka's advice, I've found a kernel extension that should do the job. And it hopefully will, once this follow-up question - Why this keyboard intercepting kernel extension doesn’t work? - is answered. :)
I'm no OS guru, but I think this is very, very unlikely. I don't suppose stuff like this is automatically recorded anywhere.
I guess you would have to look whether the part of the system that handles the startup keys is somehow accessible, and can be extended to invoke a command defined by you.
The second best thing that comes to mind is for you to write some sort of custom device driver or startup script that gets loaded at startup, and listens to keypress events.
How to approach this depends completely on what point in the boot process you want to check for keys.
If you want to check really early, your only choice is to play with the EFI (firmware) environment -- maybe you could modify rEFIt to do what you want?
After the firmware, control passes to boot.efi (BootX on PPC Macs). This could presumably be replaced/hacked, and I'd expect the source to be available as part of Darwin, but I don't see it on a quick inspection.
After that, the kernel loads (you could build your own kernel) with a minimal set of cached drivers (you could write a driver, not sure how to get it to be cached, though).
After that, all sorts of things happen more or less at once. Normal drivers get loaded, /etc/rc.local gets run, launchd items in /System/Library/LaunchDaemons and Library/LaunchDaemons become active... If you're willing to wait until this phase of the boot process, you have many options.
It's not just that it isn't recorded anywhere; for quite a while during startup there is no keyboard driver at all. So from the point of view of software, during that interval the keyboard simply doesn't exist.

When should I add a GUI?

I write many scripts at home and on the job. Most of the time the scripts get used only a few times to accomplish their chosen task and then are never used again. However, sometimes I write a script to do something more complicated, something that requires user input. It is at this point that I usually agonize over whether to implement a GUI or stick with a y/n, press 1-10, etc. command-line interface. This type of interface can become tedious to use and difficult to maintain.
I know some things lend themselves to a GUI more than others, such as selecting things in a giant list. However, the time it takes to switch a command-line application to use a GUI is prohibitive. For me, it takes a good amount of time to add a GUI with even the most simple framework I can find.
I am curious if any developers have a method of determining at what point their script has grown enough to need a GUI. Or am I going about this the wrong way, should I always be writing my scripts assuming I might later add a GUI?
This doesn't answer your question but FWIW an intermediate step, between UI and command-line, is to have a configuration file instead of a UI:
Edit the configuration file
Run the program
A configuration file format can, if necessary, be complicated and well-commented.
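A minimal sketch of that intermediate step, using Python's configparser; the file name and the option names are made up for illustration:

    # Hedged sketch: read settings from a commented config file instead of
    # prompting interactively or building a GUI.
    import configparser

    # cleanup.ini might look like:
    #   [cleanup]
    #   # directory to scan for stale files
    #   target_dir = /tmp/scratch
    #   # delete files older than this many days
    #   max_age_days = 30
    #   dry_run = yes
    config = configparser.ConfigParser()
    config.read("cleanup.ini")

    target_dir = config.get("cleanup", "target_dir", fallback="/tmp/scratch")
    max_age = config.getint("cleanup", "max_age_days", fallback=30)
    dry_run = config.getboolean("cleanup", "dry_run", fallback=True)

    print("Would clean" if dry_run else "Cleaning", target_dir,
          "of files older than", max_age, "days")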
As with many questions of this type, the answer is that it depends.
If your program/script does just one single thing by receiving a number of inputs from the user, it is better to stick with the non-GUI mode.
If the application is doing more than one thing and if you think that the user will use the application to do a lot of stuff, you may consider using a GUI.
Are you planning to distribute this program to others? Then it is better to provide a GUI.
If the users are non-technical, a GUI is a must!
That's it.
When you want to hand your stuff over to someone else in a discoverable way. Command-line scripts are awesome because they are simple and elegant, but they are not very discoverable. That is, if you were to hand your scripts over to someone else with no documentation, would they be able to figure out what they are and how to use them? If your tasks are so simple that myscript /? will explain what you need to do fully, then you don't need a GUI.
If, on the other hand, you are handing your scripts over to someone who isn't so technical, or who needs more visual guidance about the task to be done, then by all means, a GUI is a good way to go. You might even want to keep your scripts as they are and just create a separate GUI that runs them, for maximum flexibility.
I think this decision also depends on the audience who will be using your script: if it is people who are comfortable working with the command line, then there is no pressing need to add a GUI as long as your script has a good /help which explains all the parameters it accepts. But if you want the "average user" to be able to use your program, I'd rather add a GUI, because otherwise your program might not be intuitive enough for that user group.
If you only need some dialogs to improve your scripts, you can use KDE's kdialog or GNOME's Zenity.
I can't count the number of times I've written what I thought would be a 'one-off' and it became more useful than I expected, so I ended up writing a GUI for it, or I've needed to come back and use a program months later. The advantage of the GUI is that it makes it easier to remember what would otherwise likely be command-line arguments. I.e. for flags and options you can simply use check boxes, combo boxes, radio buttons, and file selectors for filenames. I use Borland C++ RAD, so it is quite quick and easy to throw together a simple (or even not so simple) dialog box. I now often start with creating the GUI.
If you use Linux, try Zenity. It's an easy to use tool to make a GUI for command-line programs.
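A minimal sketch of what that looks like when driven from a script, calling zenity via subprocess; the prompts and file handling are arbitrary, and zenity is assumed to be installed:

    # Hedged sketch: put a few zenity dialogs in front of an existing script.
    import subprocess

    def zenity(*args):
        return subprocess.run(["zenity", *args], capture_output=True, text=True)

    # Yes/no question: exit status 0 means the user clicked "Yes".
    if zenity("--question", "--text", "Run the cleanup now?").returncode != 0:
        raise SystemExit("Cancelled by user")

    # Simple text entry and a file chooser.
    name = zenity("--entry", "--text", "Label for this run:").stdout.strip()
    path = zenity("--file-selection", "--title", "Pick a log file").stdout.strip()

    zenity("--info", "--text", "Cleanup '%s' finished for %s" % (name, path))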
