How to break on Insert in Visual Studio / SQL Server 2005

I'd like to use Visual Studio to break whenever a record is inserted into a certain table, so I can see the values being inserted and the call stack from that moment. Is that possible, or am I stuck with stored procedure debugging only?

Well, since you're using SQL Server, why not just use Profiler? Set up a trace and you can watch the values as they're inserted. You can combine that with a breakpoint in Visual Studio, or you can run the insert as a transaction that rolls back, then go through the trace to find the values that would have gone in.
If you haven't used Profiler before, it's very easy to pick up and should do what I think you're looking to pull off.
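As an illustration of the rollback variant (the table and column names here are hypothetical), run the insert inside an explicit transaction while the trace is running, then roll back so nothing is actually persisted:

BEGIN TRANSACTION;

-- Profiler captures the statement and its values even though it is rolled back
INSERT INTO dbo.Orders (CustomerId, Total)
VALUES (42, 19.99);

-- The row is still visible inside the transaction if you want to inspect it
SELECT * FROM dbo.Orders WHERE CustomerId = 42;

-- Undo everything, leaving the table untouched
ROLLBACK TRANSACTION;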

Depending on how the code performing the database inserts is written, you could set a breakpoint on the function/sub that is being called and step through it to see the values being passed in, but we would probably need to see more specifically how you are actually performing your database operations in your code.
Edit: As has been said, if you stay out of Visual Studio, the SQL Server Profiler is probably your best option.

It's more effort than it is worth, but it is possible: you could create a trigger on the table and use the method described in http://support.microsoft.com/kb/316549
But as everyone is suggesting, breaking on the .NET code that does the insert, or using SQL Profiler, is much, much easier and more reliable.
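If you do try the trigger route, a minimal sketch might look like this (table, column, and trigger names are hypothetical). The trigger body gives you T-SQL to set a breakpoint in while debugging per the KB article, and the inserted pseudo-table shows exactly which values are being written:

CREATE TRIGGER trg_Orders_DebugInsert
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    -- 'inserted' holds the rows being written; break here to see the values
    SELECT CustomerId, Total FROM inserted;
END;

Remember to drop the trigger once you're done debugging.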

Related

Visual Studio extension to dump object to file during debug?

I'm debugging something (in Visual Studio 2017 and 2019) where, somewhere, a number is changing between an old version of the code and a new version. It'd be useful to be able to recursively dump the values of fields in an object from the debugger for comparison, especially where the dump is large and can be run through diff rather than compared by eye. OzCode looked promising, but the trial is doing nothing; I suspect OzCode only works for C#/.NET apps, and I'm in native C++.
I can navigate manually through the objects in the debugger and explore all the field values so clearly VS has access. I can query the values of the fields in the top level of an object in the Immediate Window by typing something of the form
?MyObject
I've seen mention of the possibility of running a loop in the Immediate Window, but that doesn't work here. Nor do other suggestions of commands to dump to JSON using .NET assemblies, because I'm not in .NET.
Using the Package Manager Console as a hacky way in, I can run PowerShell scripts that get at the debugger and iteratively dump fields to text to as deep a level as I want. It's easier than adding exploratory code, and the scripts can be created and run while stopped at a breakpoint, which makes it easier to explore what's going on. Still, it'd be so much easier if there were an extension adding a dump option to the context menu of an object in the debugger. I don't see anything obvious when searching. Does anyone know of such a thing?

creating a save point when debugging in visual studio

I have an error which occurs only very late in my code (after it's been running for ~20 minutes), so pinpointing exactly where it is is tricky, because I have a lot of recursive function calls, and if I go too far the important variable values may have been changed. Is there a way I can set a kind of save point, where all the variables have their values saved, and which I can jump back to after I've done some exploring, rather than having to run the whole thing again from the beginning?
I found this and just wanted to point out that Roger Lipscombe's comment is what I was also looking for:
Precisely: IntelliTrace https://learn.microsoft.com/en-us/visualstudio/debugger/intellitrace?view=vs-2022
and
Historical Debugging (which is part of IntelliTrace) https://learn.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2015/debugger/historical-debugging?view=vs-2015&redirectedfrom=MSDN
Both are only available in the Enterprise edition of Visual Studio.
I have a workaround for this kind of issue: use a data breakpoint. At a minimum it lets you output and save the values manually, and it can also help you check what value was applied at a given line of your code. I got the idea from this case I came across before:
Visual Studio. Debug. How to save to a file all the values a variable has had during the duration of a run?
If the IntelliTrace tool from Roger Lipscombe's suggestion is helpful for you, one idea is to use the IntelliTrace standalone collector tool on a machine without Visual Studio:
https://msdn.microsoft.com/en-us/library/hh398365.aspx

How do I deploy specific objects from a Visual Studio database project?

I have a Visual Studio 2010 database project and I'd like to do a partial deployment, i.e. of specified objects. Is this possible? The only options I can see are to either do a full deployment or stop after generating the script.
For example, I'm changing many tables and stored procs but not everything is 100% finished and I'd like to push out a specific stored procedure to my test database, including its permissions, etc.
I read a little bit about SQL Server Data Tools, which apparently supports this, but I'm not clear on whether I'd have to migrate my database project to use it instead (I'd also need the OK from my team lead), or if it's simply a plugin that adds extra functionality.
Check out Schema Comparisons. They allow you to select the objects you want to deploy. They are available without SQL Server Data Tools.
A "partial deployment" is actually a little dangerous. Consider that you will have just built your database project, your entire database project, complete with the table changes, and it has built with no errors or warnings (right?). Now you want to deploy just your stored procedure, into a database that does not have the table changes.
Your stored procedure got no errors or warnings in the context of all the changes. Are you sure it will get no errors or warnings without those changes?
You should consider a source control solution to this problem: save a copy of your stored procedure, revert to a version of the code that matches the database you'll be deploying to, then make your stored procedure changes against that. When you deploy, you will be checking whether the stored procedure makes sense in the context of the database you're actually deploying into.

Entity Framework writing to unknown database

I'm fairly new with EF and got myself rather confused about what's going on with my solution.
I'm in the situation where my code appears to be working however the changes aren't being written to the database that I would expect.
I'm using Web Developer 2010 and SQL 2008, code first approach but choosing to make my own changes in the database and manually ensure my classes match correctly.
Things seemed OK until I came across an error where the model hash wasn't what Entity Framework was expecting, so I looked at modelBuilder.Conventions.Remove<IncludeMetadataConvention>(), which wasn't available - it seems that's not around in the later versions. So I figured that if the later versions don't do this check, I'd go ahead and remove my reference to EF 4 and put the 4.3 DLL in its place. I think I also ended up deleting and renaming my database. That didn't work, so I followed up on something I found on Scott Gu's blog about naming your DbContext the same as the database, which seemed to help.
However, I'm now getting the most bizarre scenario. My code is running and the data is saving, but it's not saving to my database. In fact, I'm running a Profiler trace on the db and it doesn't even seem to be trying to connect. I can change my connection string to something invalid too, and my code will happily run, storing the data somewhere.
Any ideas what's going on? Might it be using a local database or cache that I'm unaware of? Should I just start my small project again and pretend this never happened? That'd be the professional approach, right?
I would suggest using the database-first approach if you already have your database set up, or if you want maximum control over the database.
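As for where the data is actually going: one assumption worth checking is that EF code first fell back to your local SQL Express instance, where it creates a database named after the fully-qualified context type when it can't resolve a connection string. Listing the databases on that instance should show it:

-- Run against .\SQLEXPRESS; a recently created database named after your
-- DbContext type (e.g. MyApp.Models.MyContext) is the likely culprit
SELECT name, create_date
FROM sys.databases
ORDER BY create_date DESC;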

Managing database scripts in your solutions

I usually create a solution folder in Visual Studio and put my DB scripts in them. I always use at least this set of scripts:
Drop model
Create model script
User functions
Stored procedures
Static data (lookup tables)
Test data (not deployed)
Then I simply combine these scripts into a single one and run it against SQL Server, so I'm able to recreate the whole DB in a single step.
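For illustration, the combining step can be a single master script run in SQLCMD mode that pulls the others in with the :r include directive (the file names here are hypothetical):

:r .\01_drop_model.sql
:r .\02_create_model.sql
:r .\03_user_functions.sql
:r .\04_stored_procedures.sql
:r .\05_static_data.sql
-- 06_test_data.sql is deliberately left out of deployments

Running it via sqlcmd -S myserver -i build.sql, or in Management Studio with SQLCMD mode enabled, then recreates the database in one go.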
Anyway. I've never used projects in either:
Visual Studio or
SQL Management Studio
I've tried creating a SQL Server 2008 Database Project in Visual Studio 2010, but I'm somewhat overwhelmed by all the possible server settings (which I prefer to leave at the server defaults anyway). So I'm a bit confused: should I use this project template, or should I just keep doing the same thing I've always done?
What do you use and why? What are advantages I may benefit from by using either?
If I were you, I would continue to do it the way you are doing it. In fact, I do! In my opinion, the advantages of having the actual .sql files right there in a folder to use/edit/look at far outweigh the advantages you get from a DB project. A DB project would be useful if you were doing something like storage reports, where you have to communicate with, say, 8 databases, compare them to 8 different databases, save result sets, etc. Now don't get me wrong, there are advantages to database projects; I just don't think they do much for you when you have such a simple setup that already works.
Advantages of the SQL Server 2008 Database Project in VS2010:
Not having to switch back and forth from the client you currently use to communicate with your SQL Server.
Decent data and schema compare tools.
Gives you a one-click way to reverse engineer a database into source control and keep it up to date.
You can compare projects to physical databases and vice versa. (This makes it pretty easy to keep your database up to date, no matter where you make changes: in the file system database project or in the physical database itself.)
If the tool you're currently using is not specifically tailored to SQL Server, this one is.
Extremely helpful if you need to run unit tests directly against the database without using abstractions.
If you're looking for something a little less complicated, you might want to try SQL Source Control. It won't even require you to maintain scripts, as it does this for you behind the scenes. It will, however, only work for you if you use either TFS or SVN. And it costs $295...
It has a 28-day trial period, so if you're happy to try it out, I'd be interested in your feedback.
