How to make LINQ to SQL backward compatible

I am new to LINQ to SQL. I need to add a new column to an existing table, so I updated the dbml to include this new field. Then in my C# code, I query the database and access this new field. Everything is fine with the new database; however, if I load an older database without this new field, my program crashes when it accesses the table (obviously because of this new field). How do I make it, either the database or my C# code, support backward compatibility?
Here's my code snippet.
I added a field email to the Customer table and also added it to DataContext.dbml; below is the C# code:
DataContext ctx = new DataContext();
var cusList = ctx.Customer;
foreach (var c in cusList)
{
    // ...

    // access the new field
    if (c.email != null)
        displayEmail(c.email);

    // ...
}
When I ran it through the debugger, it crashed at the very first foreach loop when I used an older version of the database without the new email field.
Thanks.

Make sure you upgrade the old database. That's what updates are made for.
I don't think there's a better option, but I might be wrong.

This should be a code-land fix. Make your code check for the existence of the column, and use different queries in each case.
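One hedged way to do that with LINQ to SQL, assuming SQL Server (the HasEmailColumn helper is mine, and the column list in the fallback query is illustrative):

using System.Data.Linq;
using System.Linq;

static bool HasEmailColumn(DataContext ctx)
{
    // Probe the schema once; an older database returns 0 here.
    return ctx.ExecuteQuery<int>(
        @"SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
          WHERE TABLE_NAME = 'Customer' AND COLUMN_NAME = 'email'")
        .Single() > 0;
}

// ... then branch before touching any rows:
if (HasEmailColumn(ctx))
{
    foreach (var c in ctx.Customer)
        if (c.email != null)
            displayEmail(c.email);
}
else
{
    // Older schema: hand-write the projection so the generated SELECT
    // never mentions 'email'; unmapped members are left at their defaults.
    foreach (var c in ctx.ExecuteQuery<Customer>(
        "SELECT CustomerId, Name FROM Customer"))
    {
        // ... email-free processing ...
    }
}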

I agree with Arnis L.: Upgrade your old database. LTS is going to want to look for that column called email in your table, and will complain if it can't find it. I could suggest a workaround that would entail using a Stored Procedure, but you'd need to update your old database to use this stored proc, so it's not a very helpful suggestion :-)
How to update your old database? This is the old-school way:
ALTER TABLE Customers
ADD Email VARCHAR(130) NULL
You could execute this manually against the older database through Query Analyzer, for example. See here for full docs on ALTER TABLE: http://msdn.microsoft.com/en-us/library/aa275462%28SQL.80%29.aspx
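If you'd rather have the application upgrade older databases itself, one hedged option is to run that same script through DataContext.ExecuteCommand at startup, guarded by a schema probe so it only runs once (assumes SQL Server and that the app's login has ALTER TABLE rights):

using (var ctx = new DataContext(connectionString))
{
    // Idempotent guard: only alter databases that still lack the column.
    int emailCols = ctx.ExecuteQuery<int>(
        @"SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
          WHERE TABLE_NAME = 'Customers' AND COLUMN_NAME = 'Email'").Single();
    if (emailCols == 0)
        ctx.ExecuteCommand("ALTER TABLE Customers ADD Email VARCHAR(130) NULL");
}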
If you are working on a team with very strict procedures for deployment from development to production systems, you would already be writing your "change scripts" to the database in this same fashion. However, if you are developing through Enterprise Manager, it might seem counter-productive to have to do the same work, a second time, just to keep old database schemas in sync with the latest schema.
For a friendlier, more "gooey" approach to this latter style of development, I can't recommend enough the usage of something like the very excellent Red Gate SQL Compare tools to help you keep multiple SQL Server databases in sync. (There are other third-party utilities out there that can supposedly do roughly the same thing, and that might even be a little cheaper, but I haven't looked much further into them.)
Best of luck! -Mike

Related

Why does SSDT VS 2019 table rename not drop the table in DB on "Publish"

I've been teaching myself SSDT for use on an upcoming project that I expect to be working on. My understanding of the "publish" operation is that it will take my SQL Server Data Project code, use that to generate something like a reference database, and then use that to compare against my target-deploy database, figure out what changes are required to get the schema into line with the reference db, and then make them.
But for a table rename, this did not happen, and I'm hoping somebody can explain what is wrong with my mental model of the process.
I've got a very simple "library"-themed test database with tables like "Libraries", "Books", and "Categories". All very simple, 2-3 columns each, just to experiment with. Then I added a 4th table, "Books_MM_Categories", to represent a many-to-many link table between "Books" and "Categories".
I published that, and all was as expected. But I'd deliberately named the link table 'wrong' so that I could try renaming it. So I renamed the sql file in my DB project, and changed its code to instead create a table named "Books_Categories_Link".
This time when I published, I expected the "Books_MM_Categories" table to be deleted from the DB, and the new one added... or to have some kind of sp_rename procedure show up to rename the table.
Instead, what I got was that both tables are now present. I can understand that my sloppy rename would have lost all the data, simply causing one new table to be created and the old one dropped, instead of an ACTUAL rename... But what I can't figure out is why the original table is not dropped. In my mental model of how this works, a table/column/view/sproc that no longer exists in the reference should likewise be eliminated from the published database. If not, then I should expect to see some error messages telling me it chose not to drop the table because of anticipated data loss.
I did see a couple of posts explaining how to use the "refactor" option in the code view window... That is working as I would expect, so I understand how to do it properly going forward.
Can anybody explain what's wrong with my mental model of how this works? I'm sure it's working as it is supposed to, but I'd like to understand where I went wrong. Why does a table not listed in my project not get deleted on publish? (I've not tried it, but I expect the same exact behavior if I export a .dacpac first and then use that to perform the deployment of the new schema.)
Thanks
EDIT 1
Somewhat curiously, when running a "Schema Compare" operation, the extra table is detected and flagged for deletion.
Your mental model seems to be correct. Check the 'Advanced' options in the 'Publish Database' dialog.
In the 'Drop' tab you can enable 'Drop objects in target but not in source' to produce the intended result.
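If you script your deployments, the same switch can be persisted so it isn't forgotten between publishes. To the best of my knowledge it maps to the DropObjectsNotInSource publish property, which looks roughly like this inside the .publish.xml profile:

<PropertyGroup>
  <DropObjectsNotInSource>True</DropObjectsNotInSource>
</PropertyGroup>

(The same property can be passed to SqlPackage.exe as /p:DropObjectsNotInSource=True.) Dropping objects is potentially destructive, which is presumably why it is off by default.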

Saving copy of old table entry to another table when updating table entry with SaveChanges()?

I'm working on an online store project where I have already made it possible for an administrator to update different table entries via the store GUI (like items, user profiles, orders, etc.). SaveChanges() is used to save the changes.
I'm currently trying to figure out how to make this work:
An entry in table "items" gets updated.
Before the entry in the table "items" gets updated, a copy of the old entry gets saved into a table named "history-items".
The copy that is saved to "history-items" preferably has a timestamp.
How would I go about doing this? (As you might tell, I just recently picked up Visual Studio, and am pretty new to everything.)
Thank you.
There are at least three ways to do this:
If you are using SQL Server 2008 or newer, this is now built-in functionality; see: http://msdn.microsoft.com/en-us/library/bb933994.aspx
If you opt not to use that, then the simplest solution is to use database triggers.
If you want to do it in C# code, then you need to read the original values before saving, and save those original values to the history table (a sketch follows below). For reading original values, see: How to get original values of an entity in Entity Framework?
I would go for option 1 if possible.
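For option 3, here is a minimal sketch assuming Entity Framework's DbContext API; the Item and HistoryItem entities are hypothetical stand-ins for the question's "items" and "history-items" tables:

using System;
using System.Data.Entity; // EF 6; the idea is the same in EF Core
using System.Linq;

public class Item
{
    public int ItemId { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class HistoryItem
{
    public int HistoryItemId { get; set; }
    public int ItemId { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
    public DateTime ArchivedAt { get; set; }
}

public class StoreContext : DbContext
{
    public DbSet<Item> Items { get; set; }
    public DbSet<HistoryItem> HistoryItems { get; set; }

    public override int SaveChanges()
    {
        // Snapshot every modified Item *before* the new values are written.
        var modified = ChangeTracker.Entries<Item>()
                                    .Where(e => e.State == EntityState.Modified)
                                    .ToList();
        foreach (var entry in modified)
        {
            HistoryItems.Add(new HistoryItem
            {
                ItemId     = entry.OriginalValues.GetValue<int>("ItemId"),
                Name       = entry.OriginalValues.GetValue<string>("Name"),
                Price      = entry.OriginalValues.GetValue<decimal>("Price"),
                ArchivedAt = DateTime.UtcNow // the requested timestamp
            });
        }
        // One call persists both the update and its history copy.
        return base.SaveChanges();
    }
}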

Cross Database Join with Linq - Updating T4 template to support DB name?

I'm currently running in a multi-DB SQL Server environment and using linq to sql to perform queries.
I'm using the approach documented here to achieve cross DB joins:
http://www.enderminh.com/blog/archive/2009/04/25/2654.aspx
so basically:
2 data contexts - Users and Payments
Users.dbo.UserDetails {PK: UserId }
Payments.dbo.CurrentPaymentMethod { PK: UserId }
I drag the tables onto the DBML, and in the properties window, change the Source from dbo.UserDetails to Users.dbo.UserDetails to fully qualify the DB name.
I can then issue a single data context cross DB join by doing something like:
var results = (from user in dataContext.GetTable<UserDetail>()
               join paymentmethod in dataContext.GetTable<CurrentPaymentMethod>()
                   on user.UserId equals paymentmethod.UserId
               ... rest of query here ...);
Now this is tickety-boo and works as I want it to. The only problem I'm currently having is when schema updates etc. happen (which is relatively frequent, as we're in a significant dev phase).
(and finally, the question!)
What I want to achieve (I've tagged the question as T4 as a guess, as I know that the DBML files are T4-guided) is an automated way, when I drag any table onto a data context, for the Source to automatically pick up the DB name (so it will have Users.dbo.UserDetails instead of just dbo.UserDetails).
Thanks for any pointers :)
Terry
Have a look at the T4 Toolbox and the LinqToSql code generator it provides (courtesy of Oleg Sych). You can customize the templates to generate references however you'd like, but I think the problem you're going to run into is that the database name isn't stored in the dbml file.
What you could probably do is add a filter to the generator, perhaps using a dictionary or similar, such that in your .tt file you maintain a list of tables and the databases they belong to, as sketched below. That way, if your maintenance task is to delete the class from the designer and drop it on again, it will get the right database name.
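A hedged sketch of what that filter could look like inside the .tt file (the lookup is maintained by hand; the Qualify helper is hypothetical and would be called wherever the template emits each table's Source):

<#+
// Hand-maintained lookup: which database owns which table.
private static readonly Dictionary<string, string> databaseByTable =
    new Dictionary<string, string>
    {
        { "dbo.UserDetails",          "Users"    },
        { "dbo.CurrentPaymentMethod", "Payments" },
    };

// Prefix a table's source with its database name when we know it,
// e.g. "dbo.UserDetails" becomes "Users.dbo.UserDetails".
private static string Qualify(string source)
{
    string db;
    return databaseByTable.TryGetValue(source, out db)
        ? db + "." + source
        : source;
}
#>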

DataAdapters against Typed DataSets = SQL Schema nightmares

I have seen many references stating that TableAdapters are weak and silly, and that any real dev would use DataAdapters. I don't know if that is true or not, but I am exploring the matter, and stressing out over how bad this whole 'DataAdapter/TableAdapter against a Typed DataSet' approach smells.
Let me try to explain...
Suppose I have my Typed DataSet defined in the xsd file, and now I'm ready to create a DataAdapter in code against that schema... (By the way, I am using OleDb to access free-standing .dbf files in a folder... no SQL Server stored procedures to call here, just plain old raw tables, ready for action.)
From my studies so far, here is how I see the DataAdapter used in conjunction with a Typed DataSet. Tell me if I am wrong. (Then I have my big complaint / question at the end.)
public DataTable GetJobsByCustomer(string CustNo)
{
    using (OleDbConnection conn1 = new OleDbConnection(dbConnectionString))
    {
        conn1.Open();
        LMVFP ds1 = new LMVFP(); // My Typed DataSet
        string sqlstring = @"SELECT act_compda, contact, cust_num, est_cost, invoiced, job_hours,
            job_invnum, job_num, job_remark, job_start, mach_cost, mat_cost, mat_mkup,
            p_o_num, priority, quote_no, quoted_by, ship_date, ship_info, shop_notes, status, total_cost
            FROM job_info
            WHERE (cust_num = ?) AND (status = 'A')
            ORDER BY priority";
        OleDbDataAdapter JobsAdapter = new OleDbDataAdapter(sqlstring, conn1);
        JobsAdapter.SelectCommand.Parameters.Add("?", OleDbType.VarChar, 6).Value = CustNo;
        JobsAdapter.Fill(ds1, "Jobs"); // A table schema in the Typed DataSet
        return ds1.Jobs;
    }
}
Is that how it goes? It does work, so that's good. And indeed the strongly typed behavior is great.
Now, my gripe... You mean to tell me that I've got to maintain the exact same SQL syntax in my DAL method (GetJobsByCustomer) to match the schema of the table in the xsd? It's crazy to have so much maintenance and disconnect between my hand-coded SQL and the xsd schema. There's no error catching at all, since you are writing a text string! You get to find out at run time whether it will work.
When you're typing all the SQL in code, it's terrible to have to look back and forth to keep your coded SQL in sync with the xsd table schema.
Surely I am missing something.
What a farce. The typed dataset works with beautiful IntelliSense and all, because it's generated from the schema, but when it comes down to it, it's just a pain to write SQL that matches the typed schema. All they've done is move the headache to a new area.
Please tell me I am missing something here that will make this much better.
I second Adam's appreciation for LINQ to SQL and EF, but I'm thinking this wouldn't be an option for you (yet) because of the lack of support for third-party DBMS. On the other hand, a third-party ORM (e.g. NHibernate) may be an option.
Perhaps I don't pay enough attention, but I'm not aware of any good reason to avoid TableAdapters vs DataAdapters. Do you have a link or two?
I don't believe you're missing anything; maintaining this type of code is never fun. Thankfully we now have LINQ to SQL and Entity Framework which can both reduce the amount of manual code maintenance necessary to keep your model objects in sync with your database.
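For contrast, a hedged sketch of what GetJobsByCustomer might shrink to under LINQ to SQL (the JobInfo entity and its property names are hypothetical, assuming a generated model mapped to job_info):

public IQueryable<JobInfo> GetJobsByCustomer(MyDataContext ctx, string custNo)
{
    // Column references are now checked by the compiler against the
    // generated model instead of living in a raw SQL string.
    return from j in ctx.JobInfos
           where j.CustNum == custNo && j.Status == "A"
           orderby j.Priority
           select j;
}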

Problem refreshing tables in the LINQ to SQL designer

I have been using LINQ to SQL for a while, and there is one thing that has always bothered me. Whenever I modify the schema of a table, in order to refresh it in the designer, I have to delete it and then add it back. That's fine, but this means I have to actually find the table in the designer. I have about 100+ tables in my database, and every time I do this, it's like finding a needle in a haystack. Well, maybe it's not that bad, but seriously, it takes way longer than it should.
Is there another option for refreshing tables that I am unaware of?
Some people use SqlMetal to 'refresh/update' their Linq2Sql designer. The designer does not have support for refreshing the schema when the DB changes; you have to manually drop the table and re-add it.
ADO Entity Framework, I believe, can refresh. I've not used it, but I think I saw this in a TechEd demo this year.
Helpful Info: Google's results for SqlMetal.
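For reference, a typical SqlMetal invocation that regenerates a .dbml straight from the live database looks something like this (server and database names are placeholders; note it regenerates the whole file, so designer customizations are lost):

sqlmetal /server:localhost /database:MyDatabase /dbml:MyDatabase.dbml /pluralize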
This is not possible using the VS linq to sql designer.
You can do this using LLBLGEN PRO, a third party tool, instead of the built-in linq to sql designer. It isn't free but it does do a ton of other stuff as well, which of course you may or may not need.
LLBLGEN PRO is actually a full set of ORM tools, but also includes an enhanced linq-to-sql designer with 'refresh model from SQL' functionality.
See here for a description of the issue - http://weblogs.asp.net/fbouma/archive/2008/05/01/linq-to-sql-support-added-to-llblgen-pro.aspx
And here for the tool - http://www.llblgen.com/
I don't do any customization of the content in the designer, so after table changes I just hit CTRL+A followed by DEL, then shift-select all of my tables and slap them back onto the designer. I don't have hundreds of tables yet, so I'm not sure if things slow down at some point, but with 20+ tables it just takes a second.
I have written an add-in that can do that (in both directions: database -> DBML, or a DBML -> SQL-DDL diff script).
Unlike SqlMetal (or EF's "update model from database") mentioned in another reply, the add-in does a true sync/refresh, applying only the changes corresponding to the differences between the model and the underlying db.
That means any customizations (renamed properties/navigation properties, etc.) that you have made in other areas of your model will not be removed/overwritten unless they are in conflict with the underlying db schema (in which case you can still preserve them by adding them to the add-in's "exclusion list").
You can download it and get a free 30-day trial license from http://www.huagati.com/dbmltools/
I have a similar comment; thought it might fit in here for anybody out there Googling a solution to this issue...
When I change the columns that are returned by a stored procedure, deleting the procedure from the designer and re-adding it does not work. The custom return type entity that the designer generates does not reflect the changes to the SP.
I've tried disconnecting the DB in the server explorer, even deleting and re-adding the connection.
The only solution I've found is this:
1. Delete the SP from the designer.
2. Save the dbml file (or the whole solution, whatever)
3. Completely close Visual Studio.
4. Re-open Visual Studio and your solution.
5. Re-add the stored procedure to the designer.
I think that qualifies as a blue ribbon pain in the rump.
Anybody got a simpler solution?
PS- To those of you with 100+ tables: Go get a real (real == mature) ORM tool. I personally vote for NetTiers. It rocks. Used it for years with no (or at least very few) complaints. You'll probably have to buy CodeSmith to use it effectively, but it's worth it. The templates are open source. And there are templates for nHibernate as well. But I've found that I don't really dig on Java ports. If I'm gonna code on MS platforms I want code that was "born" there...
...editorial complete. :P
I have had similar issues with the designer - the best thing I can suggest is creating multiple contexts for different areas of your data access - I broke mine down to as few related tables as I could get away with for each functional area. You can re-use tables across contexts, so it isn't a big deal.
There's a template for VS 2008 that replaces the designer, it should ease refreshing your LINQtoSQL classes: http://damieng.com/blog/2008/09/14/linq-to-sql-template-for-visual-studio-2008
There are a couple of other options:
Edit the .dbml file that the designer uses to draw the tables and generate the code (a sketch of the XML follows this list). I've used this approach when the changes are small (adding a couple of columns, creating a simple table).
Use SqlMetal to create the required XML for the changed tables and move the declarations by hand into the .dbml file. This one is better when the changes are more complex or larger.
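For anyone who hasn't opened one: the .dbml is plain XML, and adding a column is roughly this kind of edit (a hedged sketch with example names; the designer regenerates the code-behind when the file is saved):

<Table Name="dbo.Customer" Member="Customers">
  <Type Name="Customer">
    <Column Name="CustomerId" Type="System.Int32" DbType="Int NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" />
    <!-- the hand-added column -->
    <Column Name="Email" Type="System.String" DbType="VarChar(130)" CanBeNull="true" />
  </Type>
</Table>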
I personally detest using the designer, and I've had various issues with it whenever I've dared to use it.
I mostly use LINQ for very simple CRUD (no linked entities or anything), and if that's the case for you, it might be worth straying from the designer crutch, especially since defining LINQ to SQL entities is as easy as this:
[Table("dbo.my_table")]
public class MyTable
{
[Column("id", AutoSync = AutoSync.OnInsert, IsDbGenerated = true, IsPrimaryKey = true)]
public Int32 Id { get; set; }
[Column("name", DbType="NVarChar(50) NOT NULL")]
public String Name { get; set; }
}
This way, all your entities have their own files, which makes finding them much easier, though you'll still have to add/update the properties manually.
Of course, if you'd refactor 100+ tables, that might not be an option ;)
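A quick usage sketch for the hand-mapped entity above (the connection string is a placeholder):

using (var ctx = new DataContext(connectionString))
{
    // GetTable<T> picks up the mapping attributes declared on MyTable.
    var names = from t in ctx.GetTable<MyTable>()
                where t.Id > 100
                select t.Name;
    foreach (var n in names)
        Console.WriteLine(n);
}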
