Is it possible to extend IntelliTrace events? - visual-studio-2010

Specifically, what I'd like to do is raise new events from my apps and libraries, similar to those exposed by ADO.NET.
Real life scenario: a patch for NHibernate that shows executed queries even when they are cached (and, therefore, don't reach the ADO.NET layer)
I found a lot of documentation about using IntelliTrace and consuming its events, but none about generating them.
Is this even possible? Or is everything hardcoded in the guts of VS?

Check out this example to see how you can define your own IntelliTrace Events.
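In VS 2010, as far as I can tell, custom events are not raised from your own code; they are declared in the IntelliTrace collection plan (CollectionPlan.xml, under the Team Tools\TraceDebugger Tools folder of the VS installation, if I remember correctly), which tells the collector which method calls to record and which arguments to capture. The fragment below is only a rough, from-memory sketch of what such an entry looks like - the element names follow the default plan shipped with VS 2010, but copy an existing DiagnosticEventSpecification from that file and adjust it rather than trusting this verbatim. The NHibernate module, type and method id here are purely illustrative.

<!-- Approximate sketch only; copy a real entry from the default CollectionPlan.xml and edit it. -->
<DiagnosticEventSpecification>
  <CategoryId>custom</CategoryId>
  <SettingsName>NHibernate query</SettingsName>
  <SettingsDescription>Queries executed through NHibernate</SettingsDescription>
  <Bindings>
    <Binding>
      <ModuleSpecificationId>NHibernate.dll</ModuleSpecificationId>
      <TypeName>NHibernate.Impl.SessionImpl</TypeName>
      <MethodName>List</MethodName>
      <MethodId>NHibernate.Impl.SessionImpl.List(System.String):System.Collections.IList</MethodId>
      <ShortDescription>NHibernate query: {0}</ShortDescription>
      <LongDescription>NHibernate executed the query: {0}</LongDescription>
      <DataQueries>
        <DataQuery index="1" maxSize="256" type="String" name="query" />
      </DataQueries>
    </Binding>
  </Bindings>
</DiagnosticEventSpecification>

After editing the plan, the new event should show up alongside the built-in ADO.NET ones the next time you debug with IntelliTrace enabled.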

Related

Paradox (ObjectPal) Application causing General Protection Violations sporadically, looking for the Reason

We have a pretty big application based on Paradox / ObjectPAL. Since we moved the database from file-based Paradox tables to MS SQL 2008 Express Edition, we have encountered lots of general protection violations (GPVs) which appear sporadically. These errors seem to occur only with the Paradox runtime, not with the development edition, which makes debugging impossible. We did a lot to minimize those GPVs and it looks like it's getting better. Anyway, here and there are still annoying GPVs that crash the whole application.
So what I'm looking for is some kind of debugger / logger for Windows that shows which operations / methods cause these errors - something like the Windows event log, but with more detail that could give a hint about what and where to look. I'm not sure such a tool even exists.
I can think of two things you might try.
(1) Check with these guys
http://pnews.thedbcommunity.com/cgi-bin/dnewsweb.exe
on the subject of GPV (GPF) with the runtime but not with the development platform. I'm sure your question has come up there already.
Try searching the newsgroups first, but if that fails, your question probably belongs under "pnews.paradox-development".
(2) Add logging code to the application itself. Add a library object to encapsulate an event log file, with a custom method to report an event.
Begin with a call from the open() and close() events of each design object (form, script, report, etc). Then add a call to the action() method of any suspicious objects to detect and log specific actions.
This is tedious, I know, because you have to add the library to the Var() and Open() methods of every design object in the application. But if it is done correctly, the operation of your application becomes amazingly transparent.

Hidden features of CouchDB - related to debugging views

I've been wondering whether there are any hidden / not well known features of CouchDB.
We have had to debug map/reduce functions for views and it is quite a pain to do so (no step-by-step debugging, etc.).
We have found links such as How to console log in CouchDB, but are wondering whether anyone has found more efficient approaches or features.
CouchDB has no such debugging features, apart from a log-driven debugging approach. However, if you don't limit yourself to the default CouchDB distribution, you may find it useful to build a mocked version of the query server, to interact with it directly as this Ruby test case does, or even to switch to the Node.js query server so you can debug views right in the browser - there are many options, as you can see.
I have some pretty hefty views (1000+ lines of code). As I could not find a decent debugging framework, I stopped debugging in CouchDB altogether.
My view docs have a common library view called _lib, which is accessed from the other views in the same doc.
Initially I used a combination of the Kanape IDE and some emits that were only triggered if I had set a debug flag.
Now I have moved the complete framework to WebStorm, where I debug using Jasmine and profile using spy.js. This allowed me to find bottlenecks and keep views pretty fast, even across databases of around 25 GB.

A good way to build a JavaScript profiler for Mozilla Firefox

I'm working on a JavaScript profiler for Mozilla Firefox that would let me obtain all available information about the execution of a script on the page (DOM object calls, events, calls to functions like Math.random(), document and navigator object calls, as well as the code's own execution tree with arguments, etc.).
Currently, I think that the best way to implement this sort of profiler is by modifying Firefox's own source code.
One way to go about it is to find the implementations of the corresponding methods and add profiler logging calls there. But there are two problems with this approach:
The methods and objects are widely scattered, and I'm not really familiar with the source code at this moment. Tracking down all the functions and making sure that the profiler works as intended will take A LOT of time.
When created in this way, the profiler is going to be difficult to maintain as the Firefox source code evolves over time.
So I was wondering: is there a single class, or a small group of key classes, in the Firefox source that could be modified to let me collect the information I want? Or is there a better way of doing what I need to do?
The latest Aurora release of Firefox has a basic profiler built into its developer tools, or you can download a more advanced interface from the Mozilla Add-ons site which works with Firefox 16 or later.

Is System.AddIn mostly about making it easier to use Remoting or does it make it harder to do so?

It takes at least seven assemblies, and restricting my AddIn's data model to data types that Remoting can deal with, before the AppDomain isolation features begin to work. It is so complex! The System.AddIn team's blog implies to me that they were trying to re-create a mental model of COM, a model I never understood very well in the first place and am not sold on the benefits of. (If COM is so good, why is it dead? - rhetorical question.) If I don't need to mirror or interop with legacy COM (like VSTO does using System.AddIn), is it possible to just create some classes that load in a new AppDomain?
I can write the discovery code myself; I've done it before, and a naive implementation is pretty fast because I'm not iterating over the assemblies in the GAC!
So my specific question is: can I get the AppDomain isolation that AddIns provide with a few Remoting code snippets, and what would those be?
I'm not entirely sure that any answer to your question meets the terms of the site - there is no solution.
Yes, the remoting is easier as it is done for you. However, it is highly controlled and, as you identified, requires a little work to plumb it all together. The cache file spewed out by the discovery process is hardly welcome either.
System.AddIn excels at isolation, which is actually a bit of an arse to put together from scratch in a robust, flexible way. It supports cross-process hosting and fairly simple passage of user WPF elements from one domain to another.
One thing to remember, however, is that MAF's target audience is not those who are trying to connect two applications together. It is targeting developers who want pluggable yet secure systems (cross-process hosting protects the root application from unhandled exceptions, and AppDomains allow executing potentially foreign code with defined security). For most communication, direct yourself straight towards System.Runtime.Remoting or WCF.
If you want to continue with System.AddIn, consider the pipeline builder plugin for Visual Studio!
In conclusion: you can get System.AddIn-style isolation using Remoting, but to get a decent system you will require more than a few snippets. I am trying to replicate it myself and keep tripping up over the remote interface components - something System.AddIn does without a hitch.
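For the narrow scenario in the question - just getting a class to load and run in a second AppDomain - the core Remoting snippet is small. Here is a minimal sketch (type and member names are made up for illustration); the hard parts mentioned above (lifetime leases, unhandled exceptions, versioning the shared interfaces) are exactly what it leaves out.

using System;

// The plug-in type must derive from MarshalByRefObject so calls cross the
// AppDomain boundary through a proxy instead of copying the object.
public class MyPlugin : MarshalByRefObject
{
    public string SayHello(string name)
    {
        return "Hello " + name + " from " + AppDomain.CurrentDomain.FriendlyName;
    }
}

public static class Host
{
    public static void Main()
    {
        // Create the isolated domain; pass an AppDomainSetup / grant set to restrict it further.
        AppDomain sandbox = AppDomain.CreateDomain("PluginDomain");
        try
        {
            // CreateInstanceAndUnwrap returns a transparent proxy to the instance
            // living in the other domain.
            MyPlugin plugin = (MyPlugin)sandbox.CreateInstanceAndUnwrap(
                typeof(MyPlugin).Assembly.FullName,
                typeof(MyPlugin).FullName);

            Console.WriteLine(plugin.SayHello("world"));
        }
        finally
        {
            // Unloading the domain unloads the plug-in's assemblies with it.
            AppDomain.Unload(sandbox);
        }
    }
}

In a real host you would put a shared interface into its own assembly and load the plug-in assembly only in the sandbox domain; otherwise the plug-in's types leak into the default domain and you lose most of the isolation.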
After messing around with System.AddIn for quite a while, I'm convinced that it was added as a one-off, special-purpose solution for Microsoft use. I'm surprised it got elevated to a core part of the .NET Framework. It doesn't seem to have the refinement and polish needed for a general .NET Framework component.
I'd like to find an alternative way to create .NET managed add-ins that doesn't require so much effort.

How to consistently organize code for debugging?

When working on a big project that requires debugging (like every project), you realize how much people love "printf" over the IDE's built-in debugger. By this I mean:
Sometimes you need to render variable values to the screen (especially for interactive debugging).
Sometimes you need to log them to a file.
Sometimes you have to change a member's visibility (make it public) just so another class (a logger or a renderer, for example) can access it.
Some other times you need to save the previous value in a member just to contrast it with the new one during debugging.
...
When a project gets huge with a lot of people working on it, all this debugging-specific code can get messy and hard to differentiate from normal code. This can be crazy for those who have to update/change someone else's code or to prepare it for a release.
How do you solve this?
It is always good to have naming standards, and I guess that debug-coding standards should be quite useful (like marking every debug variable with a _DBG suffix). But I also guess naming is just not enough. Maybe centralizing it into a friendly tracker class, or creating a robust base of macros in order to erase it all for the release. I don't know.
What design techniques, patterns and standards would you embrace if you are asked to write a debug-coding document for all others in the project to follow?
I am not talking about tools, libraries or IDE-specific commands, but for OO design decisions.
Thanks.
Don't commit debugging code, just debugging tools.
Logging, OTOH, has a natural place in exception handling routines and such. Also, a few well-placed logging statements in a few commonly used APIs can be good for debugging.
Like one log statement to log all SQL executed from the system.
My vote would be with what you described as a friendly tracker class. This class would keep all of that centralized, and potentially even allow you to change debug/logging strategies dynamically.
I would avoid things like Macros simply because that's a compiler trick, and not true OO. By abstracting the concept of debug/logging, you have the opportunity to do lots of things with it including making it a no-op if needed.
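To make that concrete, here is a minimal C# sketch of such a tracker (the names are purely illustrative): the abstraction is what matters, because it lets you swap in a no-op implementation for release without touching any call sites.

using System;

// Illustrative sketch - the point is the swappable abstraction, not this exact API.
public interface IDebugTracker
{
    void Track(string category, string message);
}

// Implementation used while debugging.
public class ConsoleTracker : IDebugTracker
{
    public void Track(string category, string message)
    {
        Console.WriteLine("[{0:HH:mm:ss}] {1}: {2}", DateTime.Now, category, message);
    }
}

// No-op implementation: switch to this for release and all tracking disappears at a single point.
public class NullTracker : IDebugTracker
{
    public void Track(string category, string message) { }
}

public static class Debugging
{
    // The single switch for the whole application; could also be chosen from config at startup.
    public static IDebugTracker Tracker = new ConsoleTracker();
}

// At a call site:
//   Debugging.Tracker.Track("Physics", "previous=" + oldValue + ", new=" + newValue);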
Logging or debugging? I believe that well-designed and properly unit-tested application should not need to be permanently instrumented for debugging. Logging on the other hand can be very useful, both in finding bugs and auditing program actions. Rather than cover a lot of information that you can get elsewhere, I would point you at logging.apache.org for either concrete implementations that you can use or a template for a reasonable design of a logging infrastructure.
I think it's particularly important to avoid using System.outs / printfs directly and instead use a logging class (even a custom one). That at least gives you a centralized kill-switch for all the logging (minus the call costs in Java).
It is also useful to have that log class support info/warn/error/caveat levels, etc.
I would be careful about error levels, user ids, metadata, etc. since people don't always add them.
Also, one of the most common problems that I've seen is that people put temporary printfs in the code as they debug something, and then forget where they put them. I use a tool that tracks everything I do, so I can quickly identify all my recent edits since an abstract checkpoint and delete them. In most cases, however, you may want to impose special rules on debug code that can be checked into your source control.
In VB6 you've got
Debug.Print
which sends output to a window in the IDE. It's bearable for small projects. VB6 also has
#If <some var set in the project properties>
'debugging code
#End If
I have a logging class which I declare at the top with
Dim Trc as Std.Traces
and use in various places (often inside #If/#End If blocks)
Trc.Tracing = True
Trc.Tracefile = "c:\temp\app.log"
Trc.Trace 'with no argument stores date stamp
Trc.Trace "Var=" & var
Yes it does get messy, and yes I wish there was a better way.
We are beginning to routinely use a static class that we write trace messages to. It is very basic and still requires a call from the executing method, but it serves our purpose.
In the .NET world, there is already a fair amount of built-in trace information available, so we do not need to worry about which methods are called or what the execution time is. These calls are more for specific events which occur during the execution of the code.
If your language does not support categorization of messages through its tracing constructs, it is something you should add to your tracing code. Something that identifies different levels of importance and/or functional areas is a great start.
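A bare-bones sketch of such a static class, with a severity level and a functional area on every message (illustrative names only - in practice you would likely route this through System.Diagnostics.Trace or an existing logging library):

using System;
using System.IO;

public enum Severity { Info, Warning, Error }

public static class AppTrace
{
    private static readonly object Sync = new object();

    // Messages below this threshold are ignored.
    public static Severity Threshold = Severity.Info;
    public static string LogPath = "app.trace.log";

    public static void Write(Severity severity, string area, string message)
    {
        if (severity < Threshold)
        {
            return;
        }
        string line = string.Format("{0:u} [{1}] {2}: {3}", DateTime.UtcNow, severity, area, message);
        lock (Sync)
        {
            File.AppendAllText(LogPath, line + Environment.NewLine);
        }
    }
}

// From an executing method:
//   AppTrace.Write(Severity.Warning, "Billing", "Retrying invoice upload");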
Just avoid instrumenting your code by modifying it. Learn to use a debugger. Make logging and error handling easy. Have a look at Aspect Oriented Programming
Debugging/Logging code can indeed be intrusive. In our C++ projects, we wrap common debug/log code in macros - very much like asserts. I often find that logging is most usefull in the lower level components so it doesn't have to go everywhere.
There is a lot in the other answers to both agree and disagree with :) Having debug/logging code can be a very valuable tool for troubleshooting problems. In Windows, there are many techniques - the two major ones are:
Extensive use of checked (DBG) build asserts and lots of testing on DBG builds.
The use of ETW in what we call 'fre' or 'retail' builds.
Checked builds (what most others call DEBUG builds) are very helpful for us as well. We run all our tests on both 'fre' and 'chk' builds (on x86 and AMD64 as well; all server stuff runs on Itanium too...). Some people even self-host (dogfood) on checked builds. This does two things:
Finds lots of bugs that wouldn't be found otherwise.
Quickly eliminates noisy or unnecessary asserts.
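The same split can be approximated in your own .NET code: Debug.Assert is marked [Conditional("DEBUG")], so the checks exist in debug ('checked') builds and vanish completely from release ('fre') builds. A trivial sketch, with a made-up type and values:

using System.Diagnostics;

public static class Inventory
{
    public static int Reserve(int available, int requested)
    {
        // Compiled only when the DEBUG symbol is defined; a release build
        // contains no trace of these checks, much like a retail build drops DBG asserts.
        Debug.Assert(requested >= 0, "requested must not be negative");
        Debug.Assert(requested <= available, "cannot reserve more than is available");

        return available - requested;
    }
}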
In Windows, we use Event Tracing for Windows (ETW) extensively. ETW is an efficient static logging mechanism. The NT kernel and many components are very well instrumented. ETW has a lot of advantages:
Any ETW event provider can be dynamically enabled/disabled at run time - no rebooting or process restarts required. Most ETW providers provide granular control over individual events, or groups of events.
Events from any provider (most importantly the kernel) can be merged into a single trace so all events can be correlated.
A merged trace can be copied off the box and fully processed - with symbols.
The NT kernel sample profile interrupt can generate an ETW event - this yields a very lightweight sample profiler that can be used at any time.
On Vista and Windows Server 2008, logging an event is lock free and fully multi-core aware - a thread on each processor can independently log events with no synchronization needed between them.
This is hugely valuable for us, and can be for your Windows code as well - ETW is usable by any component, including user mode, drivers and other kernel components.
One thing we often do is write a streaming ETW consumer. Instead of putting printfs in the code, I just put ETW events at interesting places. When my component is running, I can then just run my ETW watcher at any time - the watcher receives the events and displays them, counts them, or does other interesting things with them.
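For managed components, the closest equivalent to putting ETW events at interesting places is the EventSource class (it appeared after VS 2010, in .NET 4.5, and is also available as a NuGet package for older frameworks); a minimal sketch with illustrative names:

using System.Diagnostics.Tracing;

[EventSource(Name = "MyCompany-MyComponent")]
public sealed class ComponentEventSource : EventSource
{
    public static readonly ComponentEventSource Log = new ComponentEventSource();

    [Event(1, Level = EventLevel.Informational)]
    public void RequestStarted(string url) { WriteEvent(1, url); }

    [Event(2, Level = EventLevel.Error)]
    public void RequestFailed(string url, int errorCode) { WriteEvent(2, url, errorCode); }
}

// At the interesting places in the code, instead of a printf:
//   ComponentEventSource.Log.RequestStarted(url);
// A consumer (logman, xperf/WPA, PerfView, or a custom watcher) can then enable the
// provider at run time without restarting the process.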
I very much respectfully disagree with tvanfosson. Even the best code can benefit from well-implemented logging. Well-implemented static run-time logging can make finding many problems straightforward - without it, you have zero visibility into what is happening in your component. You can look at inputs and outputs and guess - that's about it.
The key here is the term 'well implemented'. Instrumentation must be in the right place. Like anything else, this requires some thought and planning. If it is not in helpful/interesting places, then it will not help you find problems in a development, testing, or deployed scenario. You can also have too much instrumentation, causing perf problems when it is on - or even off!
Of course, different software products or components will have different needs. Some things may need very little instrumentation. But a widely deployed or critical component can greatly benefit from well-engineered instrumentation.
Here is a scenario for you (note, this very well may not apply to you... :) ). Let's say you have a line-of-business app deployed on every desktop in your company - hundreds or thousands of users. What do you do when someone has a problem? Do you stop by their office and hook up a debugger? If so, how do you know what version they have? Where do you get the right symbols? How do you get the debugger onto their system? What if it only happens once every few hours or days? Are you going to let the system run with the debugger connected all that time?
As you can imagine, hooking up a debugger in this scenario is disruptive.
If your component is instrumented with ETW, then you could ask your user to simply turn on tracing, continue to do his/her work, and then hit the "WTF" button when the problem happens. Even better: your app may be able to self-log - detecting problems at run time and turning on logging auto-magically. It could even send you ETW files when problems occur.
These are just simple examples - logging can be handled in many different ways. My key recommendation here is to think about how logging might be able to help you find, debug, and fix problems in your components at dev time, at test time, and after they are deployed.
I was burnt by the same issue in about every project I've been involved with, so now I have this habit that involves extensive use of logging libraries (whatever the language/platform provides) from the start. Any Log4X port is fine for me.
Building yourself some proper debug tools can be extremely valuable. For example in a 3D environment, you might have an option to display the octree, or to render planned AI paths, or to draw waypoints that are normally invisible. You'd probably also want some on-screen display to aid with profiling too: the current framerate, count of polygons on screen, texture memory usage, and so on.
Although this takes some time and effort to do, in the long run it can save you a lot of time and frustration.
