I have two projects using legacy databases with no associations between the tables. In one, if I create associations in the DBML file, I can reference the associations in LINQ like this:
From c In context.Cities Where c.city_name = "Portland" _
Select c.State.state_name
(assuming I added the link from City.state_abbr to State.state_abbr in the DBML file.)
In a different project that uses a different database, adding the association manually doesn't seem to give me that functionality, and I'm forced to write the LINQ query like this:
From c In context.Cities Where c.city_name = "Portland" _
Join s In context.States On c.state_abbr = s.state_abbr _
Select s.state_name
Any idea what I could be missing in the second project?
Note: These are completely contrived examples - the real source tables are nothing like each other, and are very cryptic.
Check your Error List page. You might have something like the following in there:
DBML1062: The Type attribute '[ParentTable]' of the Association element 'ParentTable_ChildTable' of the Type element 'ChildTable' does not have a primary key. No code will be generated for the association.
In which case all you should need to do is make sure that both tables have a primary key set and re-save the dbml file. This will invoke the custom tool, which will in turn update the designer.cs file and create code for the association.
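For anyone poking at the raw XML: once the key is set and the file re-saved, the relevant bits of the .dbml should look roughly like the sketch below. This is hand-written against the City/State example above, so treat the exact attributes as illustrative rather than designer output:

<Table Name="dbo.State" Member="States">
  <Type Name="State">
    <Column Name="state_abbr" Type="System.String" DbType="Char(2) NOT NULL"
            CanBeNull="false" IsPrimaryKey="true" />
    <Column Name="state_name" Type="System.String" CanBeNull="true" />
  </Type>
</Table>

<!-- and on the City type, the association that makes c.State available: -->
<Association Name="State_City" Member="State" ThisKey="state_abbr"
             OtherKey="state_abbr" Type="State" IsForeignKey="true" />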
It looks like my problem was my tables didn't have primary keys in the second project. Like I stated, these are legacy tables, so I had to do the linking and primary key stuff in the Database Context instead of the database itself, and I just forgot to specify the primary keys the second time around. Frustrating when you don't spot it, but it makes sense now.
Sometimes, when everything is configured correctly but still not working, the solution can be as simple as restarting Visual Studio.
I don't know why it happens sometimes, but I thought I should add this answer because having done some searching for a solution myself, it seems nobody has suggested this yet...
Related
I've been teaching myself SSDT for use on an upcoming project that I expect to be working on. My understanding of the "publish" operation is that it will take my SQL Server Data Project code, use that to generate something like a reference database, and then use that to compare against my target-deploy database, figure out what changes are required to get the schema into line with the reference db, and then make them.
But for a table rename, this did not happen, and I'm hoping somebody can explain what is wrong with my mental model of the process.
I've got a very simple "library" themed test database with tables like "Libraries", "Books", and "Categories". All very simple 2-3 columns just to experiment with. Then I added a 4th table "Books_MM_Categories" to represent a many-to-many link table between "Books" and "Categories".
I published that, and all was as expected. But I'd deliberately named the link table 'wrong' so that I could try renaming it. So I renamed the sql file in my DB project, and changed its code to instead create a table named "Books_Categories_Link".
This time when I published, I expected the "Books_MM_Categories" table to be deleted from the DB, and the new one added... or to have some kind of sp_rename procedure show up to rename the table.
Instead, what I got was that both tables are now present. I can understand that my sloppy rename would have lost all the data, simply causing one new table to be created and the old one dropped, instead of an ACTUAL rename... But what I can't figure out is why the original table is not dropped. In my mental model of how this works, a table/column/view/sproc that no longer exists in the reference should likewise be eliminated from the published database. If not, then I should expect to see some error messages telling me it chose not to drop the table because of anticipated data loss.
I did see a couple of posts explaining how to use the "refactor" option in the code view window... That is working as I would expect. So I understand how to do it properly going forward.
Can anybody explain what's wrong with my mental model of how this works? I'm sure it's working as it is supposed to, but I'd like to understand where I went wrong. Why does a table not listed in my project not get deleted on publish? (I've not tried it, but I expect the exact same behavior if I export a .dacpac first and then use that to perform the deployment of the new schema.)
Thanks
EDIT 1
Somewhat curiously, when running a "Schema Compare" operation, the extra table is detected and flagged for deletion.
Your mental model seems to be correct. Check 'Advanced' options in 'Publish Database' dialog.
In the 'Drop' tab you can enable 'Drop objects in target but not in source' to produce the intended result.
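For what it's worth, if you later deploy from a .dacpac with SqlPackage.exe (as you mention at the end of the question), I believe the same option is exposed as a publish property, something along these lines, with server and database names as placeholders:

SqlPackage.exe /Action:Publish /SourceFile:Library.dacpac /TargetServerName:(local) /TargetDatabaseName:LibraryDb /p:DropObjectsNotInSource=True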
Working on a school assignment, building databases. First I built my logical model of my database (Using Oracle SQL developer data modeler). There's this little blue arrow button (engineer to relational) which will take your logical model and attempt to create a relational model.
When it does this, it will automatically generate names for your foreign keys. This part infuriates me because it creates ridiculously named foreign keys. I'm simply looking for a way to turn this feature off (or I'm open to whatever suave way people actually deal with it), because so far I have to go back and meticulously rename all my foreign keys throughout the database.
As a specific example, I have a Client entity, with a unique ID called Client_ID. After the modeler engineers the relational model, it automatically concatenates the name of the entity along with the attribute (for the other entities that have Client_ID as a foreign key). So even though I'd like my foreign key to simply be Client_ID it creates Client_Client_ID. I've been googling for the better part of the day but I can't find anything related to this.
found the answer here: https://community.oracle.com/thread/4012092
Right-click the source file of your database (it's the tree-looking model in the window on the left of the modeler).
Click Settings
Naming Standard
Templates
Here you will see its format(s) for keys, constraints, etc., where you can customize them.
(Found this right after I posted the question, sorry. I swear it took me hours earlier though.)
I'm currently running in a multi-DB SQL Server environment and using linq to sql to perform queries.
I'm using the approach documented here to achieve cross DB joins:
http://www.enderminh.com/blog/archive/2009/04/25/2654.aspx
so basically:
2 data contexts - Users and Payments
Users.dbo.UserDetails {PK: UserId }
Payments.dbo.CurrentPaymentMethod { PK: UserId }
I drag the tables onto the DBML, and in the properties window, change the Source from dbo.UserDetails to Users.dbo.UserDetails to fully qualify the DB name.
I can then issue a single data context cross DB join by doing something like:
var results = (from user in dataContext.GetTable<UserDetail>()
               join paymentmethod in dataContext.GetTable<CurrentPaymentMethod>() on user.UserId equals paymentmethod.UserId
               ... rest of query here ...);
Now this is tickety boo and works as I want it to. The only problem I'm currently having is when schema updates etc. happen (which is relatively frequent as we're in a significant dev phase).
(and finally, the question!)
What I want to achieve (I've tagged the question T4 as a guess, since I know the DBML files are T4-guided) is an automated way, when I drag any table onto a data context, for the Source to automatically pick up the DB name (so it will have Users.dbo.UserDetails instead of just dbo.UserDetails)?
Thanks for any pointers :)
Terry
Have a look at the T4 Toolbox and the LinqToSql code generator it provides (Courtesy of Oleg Sych) - You can customize the templates to generate references however you'd like, but I think the problem you're going to run into is that the database name isn't stored in the dbml file.
What you could probably do is add a filter to the generator, perhaps using a dictionary or similar, such that in your .tt file, you maintain a list of tables and the databases they belong to. That way, if your maintenance task is to delete the class from the designer and drop it on again, it will get the right database name.
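A cruder alternative, if customizing the generator turns out to be more work than it's worth: post-process the .dbml after dragging tables on. The sketch below assumes the designer's Source corresponds to the Table element's Name attribute in the .dbml (worth verifying against your file); the file name and the table-to-database map are placeholders.

using System.Collections.Generic;
using System.Xml.Linq;

class DbmlSourceFixer
{
    static void Main()
    {
        // Which database owns which table (placeholder mapping).
        var databaseByTable = new Dictionary<string, string>
        {
            { "dbo.UserDetails", "Users" },
            { "dbo.CurrentPaymentMethod", "Payments" }
        };

        XNamespace ns = "http://schemas.microsoft.com/linqtosql/dbml/2007";
        XDocument dbml = XDocument.Load("MyModel.dbml");

        foreach (XElement table in dbml.Root.Elements(ns + "Table"))
        {
            string source = (string)table.Attribute("Name");   // e.g. "dbo.UserDetails"
            string database;
            if (source != null && databaseByTable.TryGetValue(source, out database))
            {
                // Prefix the source with the database name: "Users.dbo.UserDetails"
                table.SetAttributeValue("Name", database + "." + source);
            }
        }

        dbml.Save("MyModel.dbml");
    }
}

You could run something like this as a pre-build step so the qualification survives deleting and re-dragging tables.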
I have been using LINQ to SQL for a while, and there is one thing that has always bothered me. Whenever I modify the schema of a table, in order to refresh it in the designer, I have to delete it and then add it back. That's fine, but this means I have to actually find the table in the designer. I have about 100+ tables in my database, and every time I do this, it's like finding a needle in a haystack. Well, maybe it's not that bad, but seriously, it takes way longer than it should.
Is there another option for refreshing tables that I am unaware of?
Some people use SqlMetal to 'refresh/update' their Linq2Sql designer. The designer does not have support for refreshing the schema when the DB changes; you have to manually drop the table and re-add it.
The ADO.NET Entity Framework, I believe, can refresh. I've not used it, but I think I saw this in a TechEd demo this year.
Helpful Info: Google's results for SqlMetal.
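For reference, a typical SqlMetal invocation looks something like this (server, database, and namespace are placeholders):

sqlmetal /server:localhost /database:Northwind /dbml:Northwind.dbml /namespace:MyApp.Data /context:NorthwindDataContext /pluralize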
This is not possible using the VS linq to sql designer.
You can do this using LLBLGEN PRO, a third party tool, instead of the built-in linq to sql designer. It isn't free but it does do a ton of other stuff as well, which of course you may or may not need.
LLBLGEN PRO is actually a full set of ORM tools, but also includes an enhanced linq-to-sql designer with 'refresh model from SQL' functionality.
See here for description of the issue - http://weblogs.asp.net/fbouma/archive/2008/05/01/linq-to-sql-support-added-to-llblgen-pro.aspx
And here for the tool - http://www.llblgen.com/
I don't do any customization of the content on the designer so after table changes I just hit CTRL+A followed by DEL. Then shift-select all of my tables and slap them back onto the designer. I don't have 100s of tables yet so not sure if things slow down at some point but with 20+ tables it just takes a second.
I have written an add-in that can do that (in both directions: database -> DBML, or a DBML -> SQL-DDL diff script).
Unlike SQLMetal (or EF's "update model from database") mentioned in another reply, the add-in does a true sync/refresh; applying changes corresponding only to the differences between the model and the underlying db.
That means any customizations (renamed properties/navigation properties etc) that you have made in other areas of your model will not be removed/overwritten unless they are in conflict with the underlying db schema. (in which case you can still preserve them by adding them to the add-in's "exclusion list")
You can download it and get a free 30-day trial license from http://www.huagati.com/dbmltools/
I have a similar comment, thought it might fit in here for anybody out there Googling a solution to this issue...
When I change the columns that are returned by a stored procedure, deleting the procedure from the designer and re-adding it does not work. The custom return type entity that the designer generates does not reflect the changes to the SP.
I've tried disconnecting the DB in the server explorer, even deleting and re-adding the connection.
The only solution I've found is this:
1. Delete the SP from the designer.
2. Save the dbml file (or the whole solution, whatever)
3. Completely close Visual Studio.
4. Re-open Visual Studio and your solution.
5. Re-add the stored procedure to the designer.
I think that qualifies as a blue ribbon pain in the rump.
Anybody got a simpler solution?
PS- To those of you with 100+ tables: Go get a real (real == mature) ORM tool. I personally vote for NetTiers. It rocks. Used it for years with no (or at least very few) complaints. You'll probably have to buy CodeSmith to use it effectively, but it's worth it. The templates are open source. And there are templates for nHibernate as well. But I've found that I don't really dig on Java ports. If I'm gonna code on MS platforms I want code that was "born" there...
...editorial complete. :P
I have had similar issues with the designer - the best thing I can suggest is creating multiple contexts for different areas of your data access - I broke mine down to as few related tables as I could get away with for each functional area. You can re-use tables across contexts so it isn't a big deal.
There's a template for VS 2008 that replaces the designer, it should ease refreshing your LINQtoSQL classes: http://damieng.com/blog/2008/09/14/linq-to-sql-template-for-visual-studio-2008
There are a couple of other options:
Edit the .dbml file that the designer uses to draw the tables and generate the code. I've used this approach when the changes are small (adding a couple of columns, creating a simple table)
Use sqlmetal to create the required xml for the changed tables and move the declarations by hand to the .dbml file. This one is better for when the changes are either more complex or larger.
I personally detest using the designer, and I've had various issues with it whenever I've dared to use it.
I mostly use LINQ for very simple CRUD (no linked entities or anything), and if that's the case with you, it might be worth straying from the designer crutch. Especially since defining LINQ-to-SQL entities is as easy as this:
[Table("dbo.my_table")]
public class MyTable
{
[Column("id", AutoSync = AutoSync.OnInsert, IsDbGenerated = true, IsPrimaryKey = true)]
public Int32 Id { get; set; }
[Column("name", DbType="NVarChar(50) NOT NULL")]
public String Name { get; set; }
}
This way, all your entities have their own files, which makes finding them much easier, though you'll still have to add/update the properties manually.
Of course, if you'd refactor 100+ tables, that might not be an option ;)
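For completeness, consuming a hand-mapped entity like the one above is just the plain DataContext API; a minimal sketch (connection string and query are placeholders):

using System;
using System.Data.Linq;
using System.Linq;

class Program
{
    static void Main()
    {
        // Plain DataContext - no designer-generated context required.
        using (var db = new DataContext("Server=.;Database=MyDb;Integrated Security=SSPI"))
        {
            // GetTable<T> picks up the [Table]/[Column] mapping attributes on MyTable.
            var rows = db.GetTable<MyTable>()
                         .Where(t => t.Name == "example")
                         .ToList();
            Console.WriteLine(rows.Count);
        }
    }
}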
I am working with a few legacy tables that have relationships, but those relationships haven't been explicitly set as primary/foreign keys. I created a .dbml file using "Linq To Sql Classes" and established the proper Case.CaseID = CaseInfo.CaseID association. My resulting class is CasesDataContext.
My Tables (One to many):
Case
------------------
CaseID (int not null)
MetaColumn1 (varchar)
MetaColumn2 (varchar)
MetaColumn3 (varchar)
...
CaseInfo
------------------
CaseInfoID (int)
CaseID (int nulls allowed)
CaseInfoMeta (varchar)
...
I'm new to LINQ to SQL and am having trouble doing the following:
CasesDataContext db = new CasesDataContext();
var Cases = from c in db.Cases
where c.CaseInfo.CaseInfoMeta == "some value"
select c;
(Edit) My problem being that CaseInfo or CaseInfos is not available as a member of Cases.
I heard from a colleague that I might try the ADO.NET Entity Data Model to create my data context class, but I haven't tried that yet and wanted to see whether I'd be wasting my time or should go another route. Any tips, links, or help would be most appreciated.
Go back to the designer and check that the relation is set up correctly. Here is one real-life example, with BillStateMasters having a "CustomerMasters1" property (the customers for the state).
PS: naming is being cleaned up...
Update 1: You also need to make sure both tables have a primary key defined. If the primary key isn't defined on the database (and can't be defined for whatever reason), make sure to define it in the designer: open the column's properties and set it as primary key. That said, entity tracking also won't work if the entity doesn't have a primary key, which for updates/deletes means it silently doesn't modify the entity. So make sure to review all entities and have them all with a primary key (as I said, if it can't be on the db, then in the designer).
CasesDataContext db = new CasesDataContext();
var Cases = from c in db.Cases
            join ci in db.CaseInfo on
                c.CaseID equals ci.CaseID
            where ci.CaseInfoMeta == "some value"
            select new { Case = c, Info = ci };
my "join" linq is a bit rusty, but the above should get close to what you're after.
Is the association set to One to One or One to Many? If you have the association set to One to Many, then what you have is an EntitySet, not an EntityRef and you'll need to use a where clause on the dependent set to get the correct value. I suspect that you want a One to One relationship, which is not the default. Try changing it to One to One and see if you can construct the query.
Note: I'm just guessing because you haven't actually told us what the "trouble" actually is.
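To illustrate the One-to-Many case: with that kind of association the child side is a collection (an EntitySet), so you filter it instead of dotting straight through. The property names below are guesses based on the question's schema (the designer may have called the collection CaseInfos or something similar):

CasesDataContext db = new CasesDataContext();

// Filter on the dependent EntitySet rather than a single child reference:
var cases = from c in db.Cases
            where c.CaseInfos.Any(ci => ci.CaseInfoMeta == "some value")
            select c;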
Your query looks correct and should return a query result set of Case objects.
So... what's the problem?
(Edit) My problem being that CaseInfo is not available under Cases... i.e. c.CaseInfo doesn't exist where I'm assuming it would be if there were explicit primary/foreign key relationships.
What do you mean by "not available"? If you created the association in the designer as you say you did, then the query should generate SQL something along the lines of
SELECT [columns]
FROM Case INNER JOIN CaseInfo
ON Case.CaseID = CaseInfo.CaseID
WHERE CaseInfo.CaseInfoMeta = 'some value'
Have you debugged your linq query to get the SQL generated yet? What does it return?
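If it helps, dumping the generated SQL is only a couple of lines (db being your CasesDataContext and Cases the query variable from the question):

// Echo every command LINQ to SQL sends to the server:
db.Log = Console.Out;

// Or inspect a single query without executing it:
var cmd = db.GetCommand(Cases);
Console.WriteLine(cmd.CommandText);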
Couple of things you might want to try:
Check the properties of the association. Make sure that the Parent property was created as Public. It does this by default, but something may have changed.
Since you're not getting CaseInfo on C, try typing it the other direction to see if you get ci.Case with intellisense.
Delete and recreate the association all together.
There's something very basic going wrong if the child members are not showing up. It might be best to delete the dbml and recreate the whole thing.
If all else fails, switch to NHibernate. :)
After a few tests, I'm pretty sure the FK relationships are required in the DB regardless of whatever associations are created in Linq-to-SQL. i.e. if you don't have them explicitly set in the DB, then you will have to do a join manually.
Is this C#? I think you need == instead of = on this line:
where c.CaseInfo.CaseInfoMeta = "some value"
should read
where c.CaseInfo.CaseInfoMeta == "some value"