On the build side, I have an assembly that extends MSBuild tasks to support compilation and linking with external tools (gcc, et al., as it happens). I press F7 and my MSBuild tasks get invoked.
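For context, the tasks are ordinary managed MSBuild tasks that shell out to the external toolchain. A simplified sketch of the shape of one (the class name, properties, and gcc arguments here are illustrative, not my real code):

    using System;
    using System.Diagnostics;
    using Microsoft.Build.Framework;
    using Microsoft.Build.Utilities;

    // Simplified sketch of a custom MSBuild task that shells out to gcc.
    // The real tasks pass many more options; this only shows the overall shape.
    public class GccBuild : Task
    {
        [Required]
        public ITaskItem[] Sources { get; set; }

        [Required]
        public string OutputFile { get; set; }

        public override bool Execute()
        {
            string args = string.Join(" ", Array.ConvertAll(Sources, s => s.ItemSpec))
                          + " -o " + OutputFile;

            var psi = new ProcessStartInfo("gcc", args)
            {
                UseShellExecute = false,
                RedirectStandardError = true
            };

            using (Process p = Process.Start(psi))
            {
                string stderr = p.StandardError.ReadToEnd();
                p.WaitForExit();
                if (p.ExitCode != 0)
                    Log.LogError("gcc failed: " + stderr);
                return p.ExitCode == 0;
            }
        }
    }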
Now I want to do the same thing for debugging. Ultimately, when I press F5, I want gdb (local) to run and connect to gdbserver (on another computer) using some properties I set for debugging (the IP or name of the remote machine, etc.).
WinGDB and VisualGDB do this (or something like it) so I know it is possible but they have limitations and/or eccentricities that make them hard for me to use as a replacement for my existing build support. And using them just for debugging is awkward, too (extra, redundant properties to set, etc.).
So, how can I invoke a program of my choice when I press F5? I found a fairly old article that talks about setting an external program as your debugger but can't seem to make that work in VS2010. In VS2010 I find Configuration Properties -> Debugging -> Debugger to Launch but the list doesn't seem extensible. Is there some plugin architecture to add to that list?
Update: A VSPackage may be the answer. MSDN says SVsDebugLaunch "Allows a VSPackage to support starting a debugger."
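If that's the right direction, I'd expect the package to hand the launch off to the debug-launch service with something roughly like the sketch below (untested; the gdb path, arguments, and remote address are placeholders, and I still need to figure out which debug engine to specify):

    using System;
    using System.Runtime.InteropServices;
    using Microsoft.VisualStudio.Shell;
    using Microsoft.VisualStudio.Shell.Interop;

    // Untested sketch: launch an external debugger (here, a local gdb that
    // connects to a remote gdbserver) through the VS debug-launch service.
    // The gdb path, arguments, and remote address are placeholders, and a real
    // implementation would also have to provide the right debug engine via
    // clsidCustom.
    internal static class RemoteGdbLauncher
    {
        public static void Launch(IServiceProvider serviceProvider)
        {
            var info = new VsDebugTargetInfo();
            info.cbSize = (uint)Marshal.SizeOf(info);
            info.dlo = DEBUG_LAUNCH_OPERATION.DLO_CreateProcess;
            info.bstrExe = @"C:\tools\gdb.exe";                        // placeholder
            info.bstrArg = "-ex \"target remote 192.168.0.42:2345\"";  // placeholder
            info.bstrCurDir = @"C:\MyProject";                          // placeholder
            info.fSendStdoutToOutputWindow = 1;

            VsShellUtilities.LaunchDebugger(serviceProvider, info);
        }
    }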
Related
I'm used to Eclipse for Java projects which automatically builds whenever I save a file. (I can turn this off.)
I then install Infinitest which automatically runs all tests affected by the change saved.
How do I do this for Visual Studio, writing C# software?
Important note:
If you're only concerned about C#/.NET code and only want to run unit tests, then this feature already exists in Visual Studio 2017 (Enterprise edition only); it's called Live Unit Testing. Read more about it here: https://learn.microsoft.com/en-us/visualstudio/test/live-unit-testing-intro?view=vs-2017
Live Unit Testing is a technology available in Visual Studio 2017 version 15.3 that executes your unit tests automatically in real time as you make code changes.
My original answer:
(I used to be an SDE at Microsoft working on Visual Studio (2012, 2013, and 2015). I didn't work on the build pipeline myself, but I hope I can provide some insight:)
How can Visual Studio automatically build and test code?
It doesn't, and in my opinion, it shouldn't, assuming by "build and test code" you mean it should perform a standard project build and then run the tests.
Eclipse only builds what is affected by the change. Works like a charm.
Even incremental builds aren't instant, especially if there's significant post-compile activity (e.g. complicated linking and optimizations (even in debug mode), or external build tasks such as embedding resources, executable compression, code signing, etc.).
In Eclipse specifically, this feature is not perfect. Eclipse is primarily a Java IDE, and in Java projects it's quite possible to perform an incremental build very quickly because Java's build times are short anyway, and an incremental build can be as simple as swapping an embedded .class file in a .jar. In Visual Studio, for comparison, a .NET assembly also builds quickly, but the update is not as simple, because the output PE (.exe/.dll) file is harder to rebuild incrementally.
However, in other project types, especially C++, build times are much longer, so this feature is inappropriate for C/C++ developers; in fact, Eclipse's own documentation advises C/C++ users to turn it off:
https://help.eclipse.org/mars/index.jsp?topic=%2Forg.eclipse.cdt.doc.user%2Ftasks%2Fcdt_t_autobuild.htm
By default, the Eclipse workbench is configured to build projects automatically. However, for C/C++ development you should disable this option, otherwise your entire project will be rebuilt whenever, for example, you save a change to your makefile or source files. Click Project > Build Automatically and ensure there is no checkmark beside the Build Automatically menu item.
Other project types don't support this feature either, such as Eclipse's plugin for Go:
https://github.com/GoClipse/goclipse/releases/tag/v0.14.0
Changes in 0.14.0:
[...]
Project builder is no longer invoked when workspace "Build Automatically" setting is enabled and a file is saved. (this was considered a misfeature anyways)
(That parenthetical remark is in GoClipse's changelist and makes the plugin authors' opinion of Automatic Builds quite clear.)
I then install Infinitest which automatically runs all tests affected by the change saved.
Visual Studio can run your tests automatically after a build (but you still need to trigger the build yourself). This is a built-in feature; see here:
https://learn.microsoft.com/en-us/visualstudio/test/run-unit-tests-with-test-explorer?view=vs-2017
To run your unit tests after each local build, choose Test on the standard menu, and then choose Run Tests After Build on the Test Explorer toolbar.
As for my reasons why Visual Studio does not support Build-on-Save:
Only the most trivial C#/VB and TypeScript projects build in under a second. Other project types (C, C++, SQL Database, etc.) take anywhere from a few seconds for a warm rebuild of a simple project to literally hours for a large-scale C++ project with lots of imported headers on a single-core CPU, low RAM, and a 5,400 rpm IDE hard drive.
Many builds are very IO-intensive (especially C/C++ projects with lots of headers, *cough*like <windows.h>*cough*) rather than CPU-bound, and disk IO delays are a major cause of lockups and slowdowns in the other applications running on the computer, because a disk paging operation might be delayed or because those applications perform disk IO on the GUI thread. So with this feature enabled, a disk IO-heavy build just means your computer will jitter a lot every time you press Ctrl+S or whenever autosave runs.
Not every project type supports incremental builds, or can support a fast incremental build. Java is the exception to this rule because Java was designed so that each input .java source file maps 1-to-1 to an output .class file; this makes incremental builds very fast, as only the actually modified files need to be rebuilt. Other projects like C# and C++ don't have this luxury: if you make even an inconsequential 1-character edit to a C preprocessor macro or a C++ template, you'll need to recompile everything else that used it - and then the linker and the optimizer (if inlining code) will both have to re-run - not a quick task.
A build can involve deleting files on disk (such as cleaning your build output folder) or changing your global system state (such as writing to a non-project build log) - in my opinion if a program ever deletes anything under a directory that I personally own (e.g. C:\git\ or C:\Users\me\Documents\Visual Studio Projects) it had damn well better ask for direct permission from me to do so every time - especially if I want to do something with the last build output while I'm working on something. I don't want to have to copy the build output to a safe directory first. This is also why the "Clean Project" command is separate and not implied by "Build Project".
Users often press Ctrl+S habitually every few seconds (I'm one of those people) - I press Ctrl+S even when I've written incomplete code in my editor: things with syntax errors or perhaps even destructive code - I don't want that code built at all because it isn't ready to be built! Even if I have no errors in my code there's no way for the IDE to infer my intent.
Building a project is one way to get a list of errors with your codebase, but that hasn't been necessary for decades: IDEs have long had design-time errors and warnings without needing the compiler to run a build (thanks to things like Language Servers) - in fact running a build will just give me double error messages in the IDE's error window because I will already have error messages from the design-time error list.
Visual Studio, at least (I can't speak for Eclipse), enters a special kind of read-only mode during a build: you can't save further changes to disk while a build is in progress, you can't change project or IDE settings, and so on. This is because the build is a long process that depends on the project source being in a fixed, known state - the compiler can't do its job if the source files are being modified while it's reading them! So if the IDE were always building (even if just for a few seconds) after each save, users wouldn't like how the IDE blocks them from certain editing tasks until the build is done (remember that IDEs do more than just show editors: some specialized tool window might need to write to a project file just to open).
Finally, Building is not free of side-effects (in fact, that's the whole point!) - there is always a risk something could go wrong in the build process and break something else on your system. I'm not saying building is risky, but if you have a custom build script that does something risky (e.g. it runs a TRUNCATE TABLE CriticalSystemParameters) and the build breaks (because they always do) it might leave your system in a bad state.
Also, there's the (slightly philosophical) problem of: "What happens if you save incomplete changes to the build script of your project?".
Now, I admit that some project types do build very, very quickly (like TypeScript, Java and C#), and others have source files that don't need compiling and linking at all and just run validation tools (like PHP or JavaScript) - and Build-on-Save might be useful for those people - but arguably, for the limited number of people whose experience it improves, it demonstrably worsens the experience for the rest of the users.
And if you really want build-on-save, it's trivial enough to write as an extension for Visual Studio (hook the "File Save" command and then invoke the Project Build command in your handler) - or get into the habit of pressing Ctrl+B after Ctrl+S :)
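For illustration, here is a minimal sketch of that extension idea, assuming a VSPackage (or other extension) that already holds a DTE reference; it naively rebuilds the whole solution rather than just the affected project:

    using EnvDTE;

    // Sketch of "build on save": keep a field reference to DocumentEvents so
    // the COM event sink isn't garbage-collected out from under us.
    internal sealed class BuildOnSave
    {
        private readonly DTE _dte;
        private readonly DocumentEvents _documentEvents;

        public BuildOnSave(DTE dte)
        {
            _dte = dte;
            _documentEvents = dte.Events.DocumentEvents;
            _documentEvents.DocumentSaved += OnDocumentSaved;
        }

        private void OnDocumentSaved(Document document)
        {
            // Naive: rebuild the entire solution on every save. A real
            // extension would build only the project containing the document.
            _dte.Solution.SolutionBuild.Build(false);
        }
    }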
VS 2017 Enterprise Edition supports a Live Unit Testing feature. For older versions, or lower editions, some third-party providers such as Mighty Moose or NCrunch are available (other third-party solutions almost certainly exist also).
Ever since I started using ReSharper it's never been clear to me how I can step into my own external sources. Sometimes it's working, but most of the times it is not.
As my frustration is at its peak, I would like to figure out how this works once and for all.
I have two C# solution files (one for my Framework and one for my Platform). I am using code from my Framework in my Platform solution through NuGet packages.
Both solutions are located on my disk (C:\<project>\framework and C:\<project>\platform). The Framework solution contains several projects (e.g. Framework.Core and Framework.Logging).
When I am debugging my Platform solution I cannot navigate into a method (F11) that is called on one of my Framework components.
As said, this has been working fine for me in the past but now it's not working anymore and I cannot find the solution.
Thanks for your help!
ReSharper doesn't control anything about stepping into external source while debugging. The options in your screenshot control navigating into external source from standard ReSharper navigation commands (go to type, find usages, etc).
In order to debug external sources, you'll need to make sure you have access to the .pdb files for your external code. These must either be side by side with the assemblies, available in the symbol cache, or downloaded from a symbol server.
I am using dotPeek v1.2 with VS2013 Update 3 to attempt to debug a referenced C# .dll's code. I've followed all the directions from the following tutorial: Using dotPeek as a Symbol Server (http://localhost:33417/ is set as the symbol location, etc.). To be honest, I've read a bunch of articles like this and this, combed through all the required settings, and really haven't seen one working solution for debugging a 3rd-party, non-framework .dll, so I'm not convinced this is a fully working product in this respect.
Regardless, according to their documentation, dotPeek should, once the symbol server is started, allow stepping into and debugging code from 3rd-party assemblies from VS.NET. I know which .dlls to select for dotPeek because I inspected their paths from Debug -> Windows -> Modules.
I happen to have ReSharper installed as well, which allows me to decompile when I right-click a line of code and select 'Go to Declaration'. The problem is that the symbol server doesn't appear to be doing anything to assist in serving up the code at debug time; rather, the decompiled source provided by ReSharper seems to be what VS.NET wants to jump into. As a result, I always get the following error:
Source file:
C:\Users\username\AppData\Local\JetBrains\ReSharper\v8.2\SolutionCaches_ReSharper.Meijer.Ecommerce.Nav.WebAppServices.-382002776\Decompiler\decompiler\53\66e7ccc2\MyClass.cs
Module: C:\Projects\MyProject\bin\Debug
Process: [24808] vstest.executionengine.x86.exe
The source file is different from when the module was built. Would you like the debugger to use it anyway?
If I say 'yes' and step in, the debugger appears to be on lines that don't exist in the file and is out of sync. This makes sense, as it is showing the .cs class from the 'Source File' location but has the .dll loaded from /bin/Debug.
However, I don't understand why this is happening anyway as dotPeek should be serving up the loaded symbols from the /bin/Debug and not be trying to step into any decompiled source ReSharper had presented.
How do I configure this so VS2013 will actually debug the symbols and code served up from dotPeek?
One big gotcha is that you need to make sure you have a valid path set for the cache directory in the Tools -> Options -> Debugging -> Symbols page.
Also, on the main Debugging options page (Tools -> Options -> Debugging -> General) make sure you:
Uncheck the "Enable Just My Code" option
Uncheck the "Enable .NET Framework source stepping" option
Check the "Enable source server support" option
Uncheck the "Require source files to exactly match the original version
It's also worth checking the "Print source server diagnostic messages to the Output window" option, and checking the output window when trying to step into 3rd party code. It should hopefully point to any issues.
This is how I've got things set up, and I can debug 3rd party dlls (obviously, dotPeek needs to have the .dll loaded in the assembly explorer before you start debugging, too).
We have a DLL which provides the data layer for several of our projects. Typically when debugging or adding a new feature to this library, I could run one of the projects and Step Into the function call and continue debugging code in the DLL project. For some reason, that is no longer working since we switched to Visual Studio 2008... It just treats the code from the other project as a DLL it has no visibility into, and reports an exception from whatever line it crashes on.
I can work around that by just testing in the DLL's project itself, but I'd really like to be able to step in and see how things are working with the "real" code like I used to be able to do.
Any thoughts on what might have happened?
Is the pdb file for the dll in the same directory as the dll? This should all work -- I do just this on a regular basis. Look in the Modules window which will show you whether it's managed to load symbols for the dll. If it hasn't then you won't be able to step into functions in that dll.
It sounds like you have "Just My Code" enabled and VS is considering the other projects to not be your code. Try the following:
Tools -> Options -> Debugger
Uncheck "Just my Code"
Try again
I've gotten around this issue by opening a class that will be called in the project you need, placing a breakpoint, keep the file open, and run the debugger. The debugger will hit the breakpoint and the relative path that VS uses will be updated so that future classes will be opened automagically.
I would like to use Visual Studio 2008 to the greatest extent possible while effectively compiling/linking/building/etc code as if all these build processes were being done by the tools provided with MASM 6.11. The exact version of MASM does not matter, so long as it's within the 6.x range, as that is what my college is using to teach 16-bit assembly.
I have done some research on the subject and have come to the conclusion that there are several options:
Reconfigure VS to call the MASM 6.11 executables with the same flags, etc as MASM 6.11 would natively do.
Create intermediary batch file(s) to be called by VS to then invoke the proper commands for MASM's linker, etc.
Reconfigure VS's built-in build tools/rules (assembler, linker, etc) to provide an environment identical to the one used by MASM 6.11.
Option (2) was brought up when I realized that the options available in VS's "External Tools" interface may be insufficient to correctly invoke MASM's build tools, thus a batch file to interpret VS's strict method of passing arguments might be helpful, as a lot of my learning about how to get this working involved my manually calling ML.exe, LINK.exe, etc from the command prompt.
Below are several links that may prove useful in answering my question. Please keep in mind that I have read them all and none are the actual solution. I can only hope my specifying MASM 6.11 doesn't prevent anyone from contributing a perhaps more generalized answer.
A similar method to Option (2), but the users on the thread are not contactable:
http://www.codeguru.com/forum/archive/index.php/t-284051.html
(also, I have my doubts about the necessity of an intermediary batch file)
Out of date explanation to my question:
http://www.cs.fiu.edu/~downeyt/cop3402/masmaul.html
Probably the closest thing I've come to a definitive solution, but refers to a suite of tools from something besides MASM, also uses a batch file:
http://www.kipirvine.com/asm/gettingStarted/index.htm#16-bit
I apologize if my terminology for the tools used in each step of the code -> exe process is off, but since I'm trying to reproduce the entirety of steps in between completion of writing the code and generating an executable, I don't think it matters much.
There is a MASM rules file located at (on a 32-bit system, remove the "(x86)"):
C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\VCProjectDefaults\masm.rules
Copy that file to your project directory, and add it to the Custom Build Rules for your project. Then "Modify Rule File...", select the MASM build rule and "Modify Build Rule...".
Add a property:
User property type: String
Default value: *.inc
Description: Add additional MASM file dependencies.
Display name: Additional Dependencies
Is read only: False
Name: AdditionalDependencies
Property page name: General
Switch: [value]
Set the Additional Dependencies value to [AdditionalDependencies]. The build should now automatically detect changes to *.inc, and you can edit the properties for an individual asm file to specify others.
You can create a makefile project. In Visual Studio, under File / New / Project, choose Visual C++ / Makefile project.
This allows you to run an arbitrary command to build your project. It doesn't have to be C/C++. It doesn't even have to be a traditional NMake makefile. I've used it to compile a driver using a batch file, and using a NAnt script.
It should be fairly easy to get it to run the MASM 6.x toolchain.
I would suggest defining Custom Build rules depending on file extension.
(Visual Studio 2008, at least in Professional Edition, can generate .rules files, which can be distributed.) There you can define custom build tools for asm files. By using this approach, you should be able to leave the linker step as is.
Way back, we used MASM32 as an IDE to help students learn assembly. You could check their batch files to see what they do to assemble and link.
Instead of batch files, why not use a custom build step defined on the file?
If you are going to use Visual Studio, couldn't you give them a skeleton project in C/C++ with the entry point for a console app calling a function that has an empty inline assembly block, and let them fill in their results there?
Why don't you use Irvine's guide? Irvine's library is nice, and if you want, you can ignore it and work with Windows procs directly. I've been searching for a guide like this, and Irvine's was the best solution.