I would like to debug a C++ program in VSCode. The problem is that this program is run as part of a large and messy build system that spawns many processes and prepares input for the program. In other words, if I run:
./do the_task
It will compile a load of C++, generate some input and eventually - through several layers of Bash, Python and Makefiles - run something like this:
/very/long/path/to/the_task --lots --of --arguments /very/long/path/to/generated_input.xml
I'd like to debug that process using VSCode/GDB, in such a way that I can:
Set breakpoints in the_task.cpp
Just click "Start debugging" with the launch.json set to run do the_task
Unfortunately that doesn't work, because by default GDB doesn't follow child processes. You can tell it to, but then it will halt the parent process so that only a single process runs at a time. That causes everything to get stuck at some point in my case.
Some ideas I've had:
1. Is there a way to tell GDB to run a script when a new inferior (process) starts, check the executable name, and then detach from the child if it doesn't match? (There's a rough sketch of what I mean at the bottom of this post.)
2. I could create a proxy GDB/MI wrapper that pretends to VSCode that the program has been started (so the connection doesn't time out), and then, when we get to running /very/long/path/to/the_task, prefix it with gdb --interpreter=mi and forward on all of the cached commands (to set breakpoints etc.). This seems like quite a lot of work and quite hacky, so I'm not sure I like it.
3. Prefix /very/long/path/to/the_task with gdbserver. Then I can connect to it from VSCode. This is definitely the simplest and most obvious solution, but the UX sucks - you have to manually start the command, then wait, then click "Start debugging". Plus you're inevitably going to run into port-reuse annoyances.
4. The same as 3, but write a custom VSCode extension that automatically starts debugging when it detects that gdbserver has started. I've done this for Python debugging, so it does work, but there are some minor annoyances (e.g. if you restart VSCode and it restores a terminal session it doesn't work). Also, it's a fair amount of work.
Is there an obvious solution I'm missing?
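To make idea 1 concrete, this is roughly the script I have in mind. It's untested, and really a variant of the idea: it keys off catch exec rather than the new-inferior event (at fork time the child still reports the parent's executable), and it leaves uninteresting processes attached rather than detaching them, since detaching an ancestor of the_task would drop the whole process tree. I also have no idea how well this gets along with VSCode's MI driver. "the_task" is a placeholder for the real binary name.

# follow_the_task.py -- untested sketch; source it from .gdbinit or from
# launch.json's setupCommands.
import os
import gdb

TARGET = "the_task"   # basename of the binary I actually want to debug

gdb.execute("set detach-on-fork off")    # keep every forked child as an inferior
gdb.execute("set schedule-multiple on")  # let "continue" resume all inferiors
gdb.execute("catch exec")                # stop whenever an inferior calls exec

def on_stop(event):
    exe = gdb.selected_inferior().progspace.filename or ""
    if os.path.basename(exe) == TARGET:
        # This is the process I care about: leave it stopped here so the
        # breakpoints in the_task.cpp can bind, then resume from the IDE.
        print("the_task is now inferior %d" % gdb.selected_inferior().num)
        return
    # Some other layer of the build (bash, python, make, ...): keep it
    # attached and let everything run on. post_event avoids resuming from
    # inside the stop handler itself.
    gdb.post_event(lambda: gdb.execute("continue"))

gdb.events.stop.connect(on_stop)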
Related
I'm using CLion to develop a combined C / Rust program. The program spins off a number of processes that I need to debug. The normal way to do it is to start the main process, have it fork a process, then manually click Run / Attach to Process, search for the forked process, click it and then everything works fine.
This is slow and tedious. Is it possible to set an option somewhere that will automatically attach to a named process?
I'm using LLDB on a Mac.
Background:
Ruby script is packaged into an executable using OCRA 1.2.
Script is structured as follows:
begin
  <some code that runs for a while>
ensure
  <cleanup code>
end
Problem:
When I run the executable on Windows, it opens up a console window and runs as usual. If I hit Ctrl-C, the cleanup code runs. But if I close the console window, the cleanup code doesn't run.
Is there any way to ensure that the cleanup code runs, even in this scenario?
Side note: I am from a Java background, first time using Ruby.
Sort of. You need background processing, but unfortunately (1) under Windows IO.popen is not very reliable, and (2) even the Windows "start /B" command is just going to run the code in a (shared) console.
So...if you really need this, and you need to see the output of your program, you'll want to install a Windows service. You could either put the critical code directly into the service or pass it the name of your executable & voila! It'll run in the background.
So the user would have to put some real effort into killing the app, and if you need the program's output to go to the console, you could have the service return the necessary text.
If you don't need to see output, you can run the app with rubyw.exe and suppress the console. Potentially you might have your app start a second .rb file using something like start rubyw my_app.rb, depending on your requirements.
Probably not the answer you wanted, but it should work. If you really, really need it to.
I have a GUI Ruby tool that needs to spawn a child command-line process, for example ping. If I do this on Windows, the console window will appear and disappear for the console process, which is very annoying. Is it possible to start a process from a GUI Ruby script with no console window visible? If I use the backtick operator or Kernel#system, the console window appears; see the example below:
require 'tk'
require 'thread'
Thread.new { `ping 8.8.8.8` }
TkRoot.new.mainloop
The issue is that every executable on Windows is defined to be either a GUI executable or a Console executable (well, there's more detail than that but it doesn't matter here) at the time it is built. The executable that's running your Ruby script is a GUI executable (it also happens to use Tk to actually build a GUI, even if only a very simple one in your screenshot) and the ping executable is a Console executable. If a GUI executable starts a Console executable, a console is automatically created to run the executable in; you can't change this.
Of course, the picture is more complex than that. That's because a console application can actually work with the GUI (it just needs to do the right API calls) and you can use a whole catalogue of tricks to cause the console window to stay out of the way (such as starting ping through an appropriately-configured shortcut file) but such things are rather awkward. The easiest way is to have the console window be there the whole time by making Ruby itself be a console app (through naming your script with the .rb suffix, not .rbw). Yes, it doesn't really get rid of the problem, but it stops any annoying flashing.
If you were using ping as the purpose of your app (i.e., to find out if services were up) then I'd ask whether it is possible/advisable to switch to writing the checking code directly in Ruby by connecting to the service instead of pinging it, as ping just measures whether the target OS kernel is alive, not the service executable. This is a fine distinction, but I've seen machines get into a state where no executables were running yet the machine was still responding to pings; it was very strange and can totally break your mental abstractions, but it can happen. Since you're only using ping as an example, though, I think you can just focus on the (rather problematic) console handling. Still, if you can do it without running a subprocess then definitely choose that method (on Windows; if you were on any sort of Unix you wouldn't have this problem at all).
It is indeed possible to spawn processes with Ruby; there are a couple of ways to do it. I am not sure what you mean by
the console window will appear and disappear for the console process
but I think the best way for you to do it is to simply grab stdout and stderr and show them to your user in your own window. If you want the native Windows console to appear, you probably need to do something fancy with Windows scripting.
One way to keep a spawned console alive is to have it run a batch file with a PAUSE command at the end:
runping.bat:
ping %1
pause
exit
In your Ruby file:
Thread.new {`start runping.bat 8.8.8.8`}
This is not yet another "I need a console in my GUI app" question of the kind that gets discussed quite frequently. My situation differs from that.
I have a Windows GUI application that is run from a command line. Now, if you pass the wrong parameters to this application, I do not want a popup to appear stating the possible switches; I want that printed into the console that spawned my process.
I have gotten far enough that I can print into the console (via a call to AttachConsole(...) for the parent process), but the problem is that my application is not "blocking". As soon as I start it, the command prompt returns, and all output is written into this window (see attached image for illustration).
I played around a bit and created a console app, ran it, and lo and behold, it "blocks": the prompt only reappears after the app has terminated. Switching my GUI app to /SUBSYSTEM:CONSOLE causes strange errors (MSVCRTD.lib(crtexe.obj) : error LNK2019: unresolved external symbol "_main" referenced in function "___tmainCRTStartup").
I have seen the pipe approach with an ".exe" plus ".com" file pair from MSDEV, but I find it horrible. Is there a prettier way to solve this?
This is not behaviour that you can change by modifying your application (aside from re-labelling it as already discussed). The command interpreter looks at the subsystem that an executable is labelled with, and decides whether to wait for the application to terminate accordingly. If the executable is labelled as having a GUI, then the command interpreter doesn't wait for it to terminate.
In some command interpreters this is configurable. In JP Software's TCC/LE, for example, one can configure the command interpreter to always wait for applications to terminate, even GUI ones. In Microsoft's CMD, this is not configurable behaviour, however. The Microsoft answer is to use the START command with the /WAIT option.
Once again: This is not the behaviour of your application. There is, apart from relabelling as a TUI program, no programmatic way involving your code to change this.
Maybe write a console-based wrapper app that checks the parameters, prints the error message on bad parameters, and calls/starts up the actual program when the parameters are correct?
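Sketched out, the wrapper only needs a few lines. Here it is in Python purely to illustrate the shape of it; in practice you would build it with the same toolchain as the real program, and the switch names and mygui.exe below are made up:

# wrapper.py -- console-subsystem front end for a GUI program (illustrative).
import subprocess
import sys

USAGE = "usage: mygui.exe --input FILE [--verbose]"   # made-up switches

def parameters_ok(args):
    # Stand-in for whatever validation the real program performs.
    return len(args) >= 2 and args[0] == "--input"

def main():
    args = sys.argv[1:]
    if not parameters_ok(args):
        # We are a console program, so this lands in the console that ran us.
        print(USAGE, file=sys.stderr)
        return 2
    # Parameters look fine: start the real GUI program and pass its exit
    # code back so scripts can still detect failures.
    return subprocess.call(["mygui.exe"] + args)

if __name__ == "__main__":
    sys.exit(main())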
I'm running a test script from a batch file.
Because it is a test, the programs are expected to fail once in a while. That is fine as long as an error code is returned, so I can continue and mark the specific test as failed.
However, there is a very annoying behavior of executable files under Microsoft Windows - if something fails, it pops up a window like:
This application has failed to start because foo.dll was not found. Re-installing the application may fix the problem
<OK>
Or even better:
The instruction at "..." referenced to memory at "..." ..
Click on OK to terminate the program
Click on CANCEL to debug the program
The result is known - the script execution blocks until somebody presses the "OK" button. And when we talk about automated scripts that may run at night on some headless virtual machine, this is very problematic.
Is there a simple way to prevent such behavior and just make the application exit with a failure code - without changing the code of the program itself?
Is this possible at all?
The answer is the following: you need to disable WER (Windows Error Reporting).
The simplest description of this that I found is at http://www.noktec.be/archives/259
Simply (on XP): right-click on My Computer > Advanced > Error Reporting > Disable
Voila - programs crash silently!
This does not solve the problem when a DLL is missing, but that is a much rarer case, and this is good enough for me.
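For a headless VM where clicking through that dialog chain isn't an option, the same switch can be flipped from a script. I believe the equivalent on Vista and later is the DontShowUI value under the Windows Error Reporting key; a quick sketch using Python's winreg (write the same value under HKEY_LOCAL_MACHINE, elevated, to apply it machine-wide):

# disable_wer_ui.py -- sketch: suppress the WER crash dialog for the current user.
import winreg

key_path = r"Software\Microsoft\Windows\Windows Error Reporting"
with winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path) as key:
    winreg.SetValueEx(key, "DontShowUI", 0, winreg.REG_DWORD, 1)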
You can suppress AVs (access violations) and the like from showing a dialog box by running your application, or the script (the script engine, like cscript.exe), under a debugger.
Use Gflags.exe, or modify the registry directly, and set Image File Execution Options for the image in question. See this article for details on how to use the appropriate registry keys. You can set it up using a debugger command line like "C:\Debuggers\ntsd.exe -g -G -c'command'", where you can pass commands to ignore certain types of exceptions in the -c"command" argument. This will effectively give you a tool to suppress interactive dialogs as a result of exceptions like AVs, and will let the process continue (presumably to an immediate end after the exception has occurred).
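If you'd rather script the registry change than drive Gflags.exe, the relevant setting is the "Debugger" value under the Image File Execution Options key for the image. A sketch (foo.exe is a placeholder, and writing to HKLM needs admin rights):

# set_ifeo_debugger.py -- sketch: launch foo.exe under ntsd via IFEO.
# Delete the "Debugger" value again to restore normal launching.
import winreg

key_path = (r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
            r"\Image File Execution Options\foo.exe")
with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    winreg.SetValueEx(key, "Debugger", 0, winreg.REG_SZ,
                      r'"C:\Debuggers\ntsd.exe" -g -G')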
This article explains the commands you can use to control exceptions and events from within the debugger.
The -g and -G flags make sure that the process won't break into the debugger automatically during process start and end, respectively. You'll have to play with the various exception-suppression options to make sure that you 'eat' all possible first- and second-chance exceptions that might cause the process to break into the debugger.
Also, if you can tolerate a process being broken into the debugger (as against being stuck showing a dialog box), perhaps that would be a better option overall. You can evaluate each debug break in batch mode at a later time and decide which bugs you care to fix.
It is possible. We used to use IBM's Rational Robot product which could monitor the screen for specific items and, if found, send keystrokes to windows and other sorts of things.
We actually used it for fully automated unit and system testing, much like you're trying to do.
Now, I thought that Robot had been through quite a few name changes, so it may be hard to find, but there it is, right on IBM's web page, with a free downloadable trial for you. It's not cheap, clocking in at a smidgen under USD 5,000, but it was worth it for us.
There's also TestComplete, where you could get a licence for just under USD 1,000 - it touts "Black-box testing - Functional testing of any Windows application" as one of its features, and it also has a downloadable demo so you can see if it's suitable before purchase.
However, you may be able to find another product to do the same sort of thing.
I initially thought of Expect, but the ActiveState one seems to concentrate on console applications, which leads me to believe it may not do graphics well.
The only other option I can suggest is to write your own program in VBScript. I've done this before to automate the starting of many processes (log on to work VPN, start mail, log in and so on) so I could be fully set up with one mouseclick instead of having to start everything manually.
You can use AppActivate to bring a window to the foreground and SendKeys to send arbitrary keypresses to it after that. It's possible you may be able to cobble together something from that if you want a cheaper solution.