Handling Exchange Web Services (EWS) missing properties - linq

I'm relatively comfortable with EWS programming and Exchange schemas, but I've run into an interesting problem.
I have a PropertySet asking for:
ItemClass
DateTimeReceived
LastModifiedTime
Size
for every Item in the AllItems folder at the root.
I get the result set and then attempt LINQ queries against it, particularly on DateTimeReceived. Not all Items have a DateTimeReceived returned by the server, and those that don't throw an exception. I'm trying:
long msgCount = (from msg in allItems
where !msg.DateTimeReceived.Equals(null)
select msg).Count();
... which (IMO) should return the count of allItems that have a DateTimeReceived. However, the property isn't null; it simply isn't there, and accessing it throws an exception.
I'm trying to avoid iterating through the set one by one, trying each record. Anyone have a thought or experience?

Thanks TTY for the input that definitely led to the following code, which returns what I need. (Still in final testing.)
List<EWS.Item> noReceivedProperty = inputlist.Where(m => (m.GetType().GetProperty("DateTimeReceived") != null)).ToList<EWS.Item>();
Then of course, take noReceivedProperty.Count or such as needed.
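For what it's worth, the EWS Managed API also exposes TryGetProperty, which reports whether a property was actually returned without throwing. A minimal sketch of that approach, assuming allItems is an IEnumerable<Item> from Microsoft.Exchange.WebServices.Data and DateTimeReceived was requested in the property set:
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Exchange.WebServices.Data;

static class ItemExtensions
{
    // Counts only the items whose DateTimeReceived was actually returned by the server,
    // without touching the property directly (which throws when it wasn't loaded).
    public static long CountWithDateTimeReceived(this IEnumerable<Item> allItems)
    {
        return allItems.LongCount(item =>
        {
            DateTime received;
            return item.TryGetProperty(ItemSchema.DateTimeReceived, out received);
        });
    }
}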


How can the User Interface know which commands it is allowed to perform against an Aggregate Root?

The UI is decoupled from the domain, but the UI should try its best to never allow the user to issue commands that are sure to fail.
Consider the following example (pseudo-code):
DiscussionController
    #Security(is_logged)
    #Method('POST')
    #Route('addPost')
    addPostToDiscussionAction(request)
        discussionService.postToDiscussion(
            new PostToDiscussionCommand(request.discussionId, session.myUserId, request.bodyText)
        )

    #Method('GET')
    #Route('showDiscussion/{discussionId}')
    showDiscussionAction(request)
        discussionWithAllThePosts = discussionFinder.findById(request.discussionId)
        canAddPostToThisDiscussion = ???
        // render the discussion to the user, and use `canAddPostToThisDiscussion` to show/hide the form
        // from which the user can send a request to `addPostToDiscussionAction`.
        renderDiscussion(discussionWithAllThePosts, canAddPostToThisDiscussion)
PostToDiscussionCommand
    constructor(discussionId, authorId, bodyText)

DiscussionApplicationService
    postToDiscussion(command)
        discussion = discussionRepository.get(command.discussionId)
        author = collaboratorService.authorFrom(discussion.Id, command.authorId)
        post = discussion.createPost(postRepository.nextIdentity(), author, command.bodyText)
        postRepository.add(post)

DiscussionAggregate
    // originalPoster is the Author that started the discussion
    constructor(discussionId, originalPoster)

    // if the discussion is closed, you can't create a post,
    // *unless* you're the author (OP) that started the discussion
    createPost(postId, author, bodyText)
        if (this.closed && !this.originalPoster.equals(author))
            throw "Discussion is closed."
        return new Post(this.discussionId, postId, author, bodyText)

    close()
        if (this.closed)
            throw "Discussion already closed."
        this.closed = true

    isClosed()
        return this.closed
The user goes to /showDiscussion/123 and sees the discussion with the <form> from which he can submit a new post, but only if the discussion is not closed or the current user is the one who started that discussion.
Or, the user goes to /showDiscussion/123 where it's presented as a REST-as-in-HATEOAS API. A hypermedia link to /addPost will be provided only if the discussion is not closed or the authenticated user is the one who started that discussion.
How can I provide that knowledge into the UI?
I could code that into the read model,
canAddPostToThisDiscussion = !discussionWithAllThePosts.discussion.isClosed
&& discussionWithAllThePosts.discussion.originalPoster.id == session.currentUserId
but then I need to maintain that logic and keep it in sync with the write model. This is a fairly simple example, but as the state transitions of an aggregate become more complex, it may become really hard to do. I'd like to imagine my aggregates as state machines, with their workflows (like the RESTBucks example). But I don't like the idea of moving that business logic outside my domain model and putting it in a service that both the read side and the write side can use.
Maybe this isn't the best example, but since an aggregate root is basically a consistency boundary, we know that we need to prevent invalid state transitions in its life cycle, and in each transition to a new state some operations may become illegal and vice versa. So, how can the user interface know what is allowed or not? What are my alternatives? How should I approach this problem? Do you have any examples to provide?
How can I provide that knowledge into the UI?
The easiest way is probably to share the domain model's understanding of what is possible with the UI. Ta Da.
Here's a way to think about it -- in the abstract, all of the write model logic has a fairly simple looking shape.
{
    // Notice that these statements are queries
    State currentState = bookOfRecord.getState()
    State nextState = model.computeNextState(currentState, command)

    // This statement is a command
    bookOfRecord.replace(currentState, nextState)
}
Key ideas here: the book of record is the authority of state; everybody else (including the "write model") is working with a stale copy.
What the model represents is a collection of constraints that ensure that the business invariant is satisfied. Over the lifetime of a system, there might be many different sets of constraints, as the understanding of the business changes.
The write model is the authority for which collection of constraints is currently enforced when replacing the state in the book of record. Everybody else is working with a stale copy.
The staleness is something to keep in mind; in a distributed system, any validation you perform is provisional -- unless you have a lock on the state and a lock on the model, either could be changed while your messages are in flight.
This means that your validation is approximate anyway, so you don't need to be too concerned that you have all of the fiddly details right. You assume that your stale copy of the state is approximately right, and your current understanding of the model is approximately right, and if the command is valid given those pre-conditions, then it is checked enough to send.
I don't like the idea to move that business logic outside my domain model, and put it in a service that both the read side and write side can use.
I think the best answer here is "get over it". I get it; having the business logic inside the aggregate root is what the literature tells us to do. But if you continue to refactor, identifying common patterns and separating concerns, you'll see that entities are really just plumbing around a reference to state and a functional core.
AggregateRoot {
    final Reference<State> bookOfRecord;
    final Model<State,Command> theModel;

    onCommand(Command command) {
        State currentState = bookOfRecord.getState()
        State nextState = theModel.computeNextState(currentState, command)
        bookOfRecord.replace(currentState, nextState)
    }
}
All we've done here is take the "construct the next state" logic, which we used to have scattered throughout the AggregateRoot, and encapsulate it into a separate responsibility boundary. Here, it's specific to the root itself, but an equivalent refactoring is to pass it as an argument.
AggregateRoot {
    final Reference<State> bookOfRecord;

    onCommand(Model<State,Command> theModel, Command command) {
        State currentState = bookOfRecord.getState()
        State nextState = theModel.computeNextState(currentState, command)
        bookOfRecord.replace(currentState, nextState)
    }
}
In other words, the model, teased out from the plumbing of tracking state, is a domain service. The domain logic within the domain service is just as much a part of the domain model as the domain logic within the aggregate -- the two implementations are dual to one another.
And there's no reason that a read model of your domain shouldn't have access to a domain service.
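As a rough C# sketch of that idea (the type and member names here are illustrative, not from the post), the constraint lives in one object that both sides consult:
// Hypothetical sketch: the rule lives in one place and both the write side
// (before replacing state) and the read side (to show/hide the form or link)
// ask the same question.
public sealed class DiscussionState
{
    public bool IsClosed { get; set; }
    public string OriginalPosterId { get; set; }
}

public sealed class DiscussionRules // the "model", teased out as a domain service
{
    public bool CanAddPost(DiscussionState state, string authorId)
    {
        // same constraint the aggregate enforces in createPost
        return !state.IsClosed || state.OriginalPosterId == authorId;
    }
}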
I don't like the idea of sharing domain knowledge (code) between the write and the read models, as you will have to continuously keep them in sync, and that's really a challenge even if you are the only developer in your company.
But the good news is that you don't have to duplicate anything. If you designed your Aggregate to be pure, with no side effects, as you should (!), you can simply send it the command without persisting the changes. If the command throws an exception, it would not succeed; otherwise it would. In the case of CQRS this is even better, as you have a third outcome: idempotent command detection, in which case the command succeeds but has no effect (no events are raised but no exception is thrown either), and the UI might find this interesting.
So, as an example you could have something like this:
DiscussionController
    #Security(is_logged)
    #Method('POST')
    #Route('addPost')
    addPostToDiscussionAction(request)
        discussionService.postToDiscussion(
            new PostToDiscussionCommand(request.discussionId, session.myUserId, request.bodyText)
        )

    #Method('GET')
    #Route('showDiscussion/{discussionId}')
    showDiscussionAction(request)
        discussionWithAllThePosts = discussionFinder.findById(request.discussionId)
        canAddPostToThisDiscussion = discussionService.canPostToDiscussion(request.discussionId, session.myUserId, "some sample body")
        // render the discussion to the user, and use `canAddPostToThisDiscussion` to show/hide the form
        // from which the user can send a request to `addPostToDiscussionAction`.
        renderDiscussion(discussionWithAllThePosts, canAddPostToThisDiscussion)

DiscussionApplicationService
    postToDiscussion(command)
        discussion = discussionRepository.get(command.discussionId)
        author = collaboratorService.authorFrom(discussion.Id, command.authorId)
        post = discussion.createPost(postRepository.nextIdentity(), author, command.bodyText)
        postRepository.add(post)

    canPostToDiscussion(discussionId, authorId, bodyText)
        discussion = discussionRepository.get(discussionId)
        author = collaboratorService.authorFrom(discussion.Id, authorId)
        try
        {
            post = discussion.createPost(postRepository.nextIdentity(), author, bodyText)
            return true
        }
        catch (exception)
        {
            return false
        }
You could even have a method named whyCantPostToDiscussion that would return the exception or the exception message and display it in the UI.
There is only one issue with the code: the call to postRepository.nextIdentity(), because it would increase the next ID every time. You could replace it with something like postRepository.getBiggestIdentity(), which should have no side effect.
I find it is rare that authorization is actually part of the domain. If it isn't, it makes sense to move that logic out into its own service which the UI and the domain can make use of.
I like to build up a set of rules using the specification pattern. I find it to be a fairly elegant way to build up the rules.
This also plays very well in a CQRS context, as you can run each command through the 'rules engine' before it gets issued to your ARs. If you push queries through a message routing system you can do the same for queries. I've had a lot of success with this approach.
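A minimal C# sketch of that specification idea (all names here are illustrative, not from the answer):
using System;

// Each rule is a small object; rules can be combined and evaluated before a command is issued.
public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

public sealed class Discussion
{
    public bool IsClosed { get; set; }
    public string OriginalPosterId { get; set; }
}

public sealed class DiscussionIsOpen : ISpecification<Discussion>
{
    public bool IsSatisfiedBy(Discussion d) => !d.IsClosed;
}

public sealed class UserIsOriginalPoster : ISpecification<Discussion>
{
    private readonly string userId;
    public UserIsOriginalPoster(string userId) { this.userId = userId; }
    public bool IsSatisfiedBy(Discussion d) => d.OriginalPosterId == userId;
}

public sealed class OrSpecification<T> : ISpecification<T>
{
    private readonly ISpecification<T> left;
    private readonly ISpecification<T> right;
    public OrSpecification(ISpecification<T> left, ISpecification<T> right)
    {
        this.left = left;
        this.right = right;
    }
    public bool IsSatisfiedBy(T candidate) =>
        left.IsSatisfiedBy(candidate) || right.IsSatisfiedBy(candidate);
}

// Usage: evaluate the combined rule before showing the form or issuing the command.
// var canPost = new OrSpecification<Discussion>(
//         new DiscussionIsOpen(), new UserIsOriginalPoster(currentUserId))
//     .IsSatisfiedBy(discussion);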
The response you are looking for is HATEOAS; look no further. You must implement your REST API as truly RESTful (maturity level 3), using hypertext to model the state transitions and returning links to the clients (the UI being one of those). These links represent the actions the user can execute in their context according to the model state. It's simple: if you return a link from the server, you bind it to a button in the UI; if you don't return the link because of business invariants, you don't show the button in the UI. There are many more concepts behind it, such as designing a good API backed by a well-designed domain model, but this is the general idea and it fits exactly what you want.
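To make the idea concrete, here is a small hedged C# sketch (route strings and names are made up) of building a response whose links depend on what the domain currently allows:
using System.Collections.Generic;

// Hypothetical sketch: expose the "addPost" link only when the transition is allowed.
// A client that doesn't receive the link simply doesn't render the button/form.
public static class DiscussionLinks
{
    public static IDictionary<string, string> For(int discussionId, bool canAddPost)
    {
        var links = new Dictionary<string, string>
        {
            { "self", "/showDiscussion/" + discussionId }
        };
        if (canAddPost)
        {
            links.Add("addPost", "/addPost");
        }
        return links;
    }
}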

MSCRM 2011 EntityCollection and LINQ empty resultset

I have an EntityCollection ec in C# which has been populated with all Accounts.
Now I want another List or EntityCollection from ec that has all the accounts with an active status.
I am using LINQ for this.
But both forms of LINQ below return an empty result, while ec has 354 records:
var activeCRMEC = (from cl in ec.Entities
where cl.Attributes["statecode"].ToString()=="0"
select cl);
OR
var activeCRMEC = ec.Entities.Where(x => x.Attributes["statecode"].ToString() == "0");
Each time the result set is empty and I am unable to iterate over it, even though 300 or so accounts are active; I have checked.
Same thing happens when I use some other attribute such as name etc.
Please be kind enough to point out my mistake.
You can generate early-bound classes to write LINQ queries, or else you can write late-bound LINQ queries using the OrganizationServiceContext class.
For Your Reference:
OrganizationServiceContext orgServiceContext = new OrganizationServiceContext(service);
var retrieveAll = orgServiceContext.CreateQuery("account")
    .ToList()
    .Where(w => w.GetAttributeValue<OptionSetValue>("statecode").Value == 0);
I'll give you a few hints, and then tell you what I'm guessing your issue is.
First, use early-bound entities. If you've never generated them before, use the early-bound generator; you'll save yourself a lot of headaches.
Second, if you can't use early-bound entities, use the GetAttributeValue<T>() method on the Entity class. It'll convert types for you and handle null reference issues.
Your LINQ expressions look to be correct, so either ec.Entities doesn't have any entities in it that match the criteria of "statecode" equaling 0, or you possibly have some deferred execution occurring on your IEnumerables. Try calling ToList() on activeCRMEC immediately after the LINQ statement to ensure that is not your issue.
The statecode is an OptionSetValue; you should cast it this way:
((OptionSetValue)cl.Attributes["statecode"]).Value == 0
or
cl.GetAttributeValue<OptionSetValue>("statecode").Value == 0
Both ways are valid; you should ask for the Value, which is an int.
Hope this can help you.
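Putting the above together, a minimal sketch of the corrected filter (assuming ec is the already-populated EntityCollection and that statecode was included in the query's ColumnSet):
// Requires Microsoft.Xrm.Sdk and System.Linq.
var activeAccounts = ec.Entities
    .Where(e => e.Contains("statecode")
             && e.GetAttributeValue<OptionSetValue>("statecode").Value == 0)
    .ToList();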

Why does HasNext() return false, when

I am using libRETS for .NET, querying http://retsgw.flexmls.com/rets2_1/ with a valid user account. From C#, after calling Search() I check the count using GetCount() and get 6300 results, but the first call to HasNext() returns false.
Checking the XML response, it looks like the results are empty () even though the result count provides a number.
So... where did the results go?
The exact query is the following:
http://retsgw.flexmls.com/rets2_1/Search?Class=OpenHouse&Count=1&QueryType=DMQL2&SearchType=OpenHouse&Select=ListingID&StandardNames=1
Here is the request:
SearchRequest request = client.CreateSearchRequest("OpenHouse", "OpenHouse", "");
request.SetStandardNames(true);
request.SetSelect("ListingID");
Here is how the request is made:
SearchResultSet result = client.Search(request);
Here is how the result is handled:
while (result.HasNext()) {
// Do something
}
So, it looks like the FlexMLS Support was able to help (rather quickly).
I needed to add &Format=COMPACT-DECODED to the query string.
So, in the code it would look like this:
request.SetFormatType(SearchRequest.FormatType.COMPACT_DECODED);
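For completeness, the request with the fix folded in might look like this (same objects as above; treat it as a sketch rather than verified libRETS usage):
SearchRequest request = client.CreateSearchRequest("OpenHouse", "OpenHouse", "");
request.SetStandardNames(true);
request.SetSelect("ListingID");
request.SetFormatType(SearchRequest.FormatType.COMPACT_DECODED); // the missing piece

SearchResultSet result = client.Search(request);
while (result.HasNext())
{
    // process each row here
}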
1) You are setting StandardNames to true AND then setting a selection. That selection may not exist in StandardNames. (You've reviewed the metadata returned by the server, right?) It's possible the server doesn't take the select into account when doing the count, but does on a full query; therefore it doesn't have any information to send back because it doesn't have a Table matching what you selected. What happens when you don't set the Select?
2) Have you done a packet trace or set libRETS to log the network traffic to a file? (I can't tell if that's what you mean by "Checking the XML response, it looks like the results are empty () even though the result count provides a number.") If you haven't, do that and see if the server is passing back any information.
If the server is passing back information, you might have discovered a bug in libRETS, and I invite you to join the libRETS-users mailing list and share this data (and that network trace) there.
If the server is passing back 0 results, you may need to contact your MLS and/or FlexMLS to see whether you have permission to view the results. Some RETS servers have fine-grained permissions where you can get the count but not the data.

Reliable and efficient way to handle Azure Table Batch updates

I have an IEnumerable that I'd like to add to Azure Table in the most efficient way possible. Since every batch write has to be directed to the same PartitionKey, with a limit of 100 rows per write...
Does anyone want to take a crack at implementing this the "right" way as referenced in the TODO section? I'm not sure why MSFT didn't finish the task here...
Also, I'm not sure whether error handling will complicate this, or what the correct way to implement it is. Here is the code from the Microsoft patterns & practices team for the Windows Azure "Tailspin Toys" demo:
public void Add(IEnumerable<T> objs)
{
    // todo: Optimize: The Add method that takes an IEnumerable parameter should check the number of items in the batch and the size of the payload before calling the SaveChanges method with the SaveChangesOptions.Batch option. For more information about batches and Windows Azure table storage, see the section, "Transactions in aExpense," in Chapter 5, "Phase 2: Automating Deployment and Using Windows Azure Storage," of the book, Windows Azure Architecture Guide, Part 1: Moving Applications to the Cloud, available at http://msdn.microsoft.com/en-us/library/ff728592.aspx.
    TableServiceContext context = this.CreateContext();

    foreach (var obj in objs)
    {
        context.AddObject(this.tableName, obj);
    }

    var saveChangesOptions = SaveChangesOptions.None;
    if (objs.Distinct(new PartitionKeyComparer()).Count() == 1)
    {
        saveChangesOptions = SaveChangesOptions.Batch;
    }

    context.SaveChanges(saveChangesOptions);
}

private class PartitionKeyComparer : IEqualityComparer<TableServiceEntity>
{
    public bool Equals(TableServiceEntity x, TableServiceEntity y)
    {
        return string.Compare(x.PartitionKey, y.PartitionKey, true, System.Globalization.CultureInfo.InvariantCulture) == 0;
    }

    public int GetHashCode(TableServiceEntity obj)
    {
        return obj.PartitionKey.GetHashCode();
    }
}
Well, we (the patterns & practices team) just optimized for showing other things we considered useful. The code above is not really a "general purpose library", but rather a specific method for the sample that uses it.
At the time we thought that adding that extra error handling would not add much, and we decided to keep it simple, but... we might have been wrong.
Anyway, if you follow the link in the //TODO:, you will find another section of a previous guide we wrote that talks a little bit more about error handling in "complex" storage transactions (not in the "ACID" form though, as transactions "a la DTC" are not supported in Windows Azure Storage).
Link is this: http://msdn.microsoft.com/en-us/library/ff803365.aspx
The limitations are listed in more detail there:
Only one instance of the entity should be present in the batch
Max 100 entities or 4 MB payload
Same PartitionKey (which is being handled in the code: notice that "batch" is only specified if there's a single Partition key)
etc.
Adding some extra error handling should not overcomplicate things too much, but it depends on the type of app you are building on top of this and your preference to handle this higher or lower in your app stack. In our example, the app would never expect > 100 entities anyway, so it would simply bubble the exception up if that situation happened (because it should be truly exceptional). Same with the total size. The use cases implemented in the app make it impossible to have the same entity in the same collection, so again, that should never happen (and if it did, it would simply throw).
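For the chunking part of that TODO, here is a rough sketch of one way it could look (assuming, as in the sample, that T derives from TableServiceEntity so it has a PartitionKey; the 4 MB payload check and retries are left out; requires System.Linq and System.Collections.Generic):
// Sketch: entity group transactions require a single PartitionKey and at most
// 100 entities, so group by PartitionKey and flush in chunks of 100.
public void Add(IEnumerable<T> objs)
{
    const int maxBatchSize = 100;

    foreach (var partition in objs.GroupBy(o => o.PartitionKey))
    {
        var chunk = new List<T>(maxBatchSize);
        foreach (var obj in partition)
        {
            chunk.Add(obj);
            if (chunk.Count == maxBatchSize)
            {
                SaveBatch(chunk);
                chunk.Clear();
            }
        }

        if (chunk.Count > 0)
        {
            SaveBatch(chunk);
        }
    }
}

private void SaveBatch(IEnumerable<T> chunk)
{
    TableServiceContext context = this.CreateContext();
    foreach (var obj in chunk)
    {
        context.AddObject(this.tableName, obj);
    }

    // Every entity in the chunk shares a PartitionKey, so Batch is valid here.
    context.SaveChanges(SaveChangesOptions.Batch);
}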
All "entity group transactions" limitations are documented here: http://msdn.microsoft.com/en-us/library/dd894038.aspx
Let us know how it goes! I'm also interested to know if other pieces of the guide were useful for you.

Debugging LINQ to SQL SubmitChanges()

I am having a really hard time attempting to debug LINQ to SQL and submitting changes.
I have been using http://weblogs.asp.net/scottgu/archive/2007/07/31/linq-to-sql-debug-visualizer.aspx, which works great for debugging simple queries.
I'm working in the DataContext Class for my project with the following snippet from my application:
JobMaster newJobToCreate = new JobMaster();
newJobToCreate.JobID = 9999;
newJobToCreate.ProjectID = "New Project";
this.UpdateJobMaster(newJobToCreate);
this.SubmitChanges();
I will catch some very odd exceptions when I run this.SubmitChanges():
Index was outside the bounds of the array.
The stack trace goes places I cannot step into:
at System.Data.Linq.IdentityManager.StandardIdentityManager.MultiKeyManager`3.TryCreateKeyFromValues(Object[] values, MultiKey`2& k)
at System.Data.Linq.IdentityManager.StandardIdentityManager.IdentityCache`2.Find(Object[] keyValues)
at System.Data.Linq.IdentityManager.StandardIdentityManager.Find(MetaType type, Object[] keyValues)
at System.Data.Linq.CommonDataServices.GetCachedObject(MetaType type, Object[] keyValues)
at System.Data.Linq.ChangeProcessor.GetOtherItem(MetaAssociation assoc, Object instance)
at System.Data.Linq.ChangeProcessor.BuildEdgeMaps()
at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode)
at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
at System.Data.Linq.DataContext.SubmitChanges()
at JobTrakDataContext.CreateNewJob(NewJob job, String userName) in D:\JobTrakDataContext.cs:line 1119
Does anyone have any tools or techniques they use? Am I missing something simple?
EDIT:
I've set up .NET debugging using Slace's suggestion; however, the .NET 3.5 code is not yet available: http://referencesource.microsoft.com/netframework.aspx
EDIT2:
I've changed to InsertOnSubmit as per sirrocco's suggestion, still getting the same error.
EDIT3:
I've implemented Sam's suggestions, trying to log the generated SQL and to catch the ChangeConflictException. These suggestions do not shed any more light; I never actually get as far as generating SQL before my exception is thrown.
EDIT4:
I found an answer that works for me below. It's just a theory, but it has fixed my current issue.
I always found it useful to know exactly what changes the DataContext is going to send when the SubmitChanges() method is called.
I use the DataContext.GetChangeSet() method; it returns a ChangeSet instance that holds three read-only ILists of the objects that have been added, modified, or removed.
You can place a breakpoint just before the SubmitChanges method call, and add a Watch (or Quick Watch) containing:
ctx.GetChangeSet();
Where ctx is the current instance of your DataContext, and then you'll be able to track all the changes that will be effective on the SubmitChanges call.
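If you'd rather dump it from code than from a watch window, a small sketch (ctx is your DataContext, as above; requires System.Data.Linq):
// Sketch: print the pending changes just before SubmitChanges.
ChangeSet changes = ctx.GetChangeSet();
Console.WriteLine("Inserts: {0}, Updates: {1}, Deletes: {2}",
    changes.Inserts.Count, changes.Updates.Count, changes.Deletes.Count);
foreach (object updated in changes.Updates)
{
    Console.WriteLine("Will update: {0}", updated);
}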
First, thanks everyone for the help, I finally found it.
The solution was to drop the .dbml file from the project, add a blank .dbml file and repopulate it with the tables needed for my project from the 'Server Explorer'.
I noticed a couple of things while I was doing this:
There are a few tables in the system named with two words and a space in between, e.g. 'Job Master'. When I pulled that table back into the .dbml file it would create a table called 'Job_Master'; it replaced the space with an underscore.
In the original .dbml file one of my developers had gone through and removed all of the underscores, so 'Job_Master' became 'JobMaster' in the .dbml file. In code we could then refer to the table with a naming convention that was, for us, more standard.
My theory is that somewhere the translation from 'JobMaster' to 'Job Master' was lost while doing the projection, and I kept coming up with the array out of bounds error.
It is only a theory. If someone can better explain it I would love to have a concrete answer here.
My first debugging action would be to look at the generated SQL:
JobMaster newJobToCreate = new JobMaster();
newJobToCreate.JobID = 9999;
newJobToCreate.ProjectID = "New Project";
this.UpdateJobMaster(newJobToCreate);
this.Log = Console.Out; // prints the SQL to the debug console
this.SubmitChanges();
The second would be to capture the ChangeConflictException and have a look at the details of failure.
catch (ChangeConflictException e)
{
    Console.WriteLine("Optimistic concurrency error.");
    Console.WriteLine(e.Message);
    Console.ReadLine();
    foreach (ObjectChangeConflict occ in db.ChangeConflicts)
    {
        MetaTable metatable = db.Mapping.GetTable(occ.Object.GetType());
        Customer entityInConflict = (Customer)occ.Object;
        Console.WriteLine("Table name: {0}", metatable.TableName);
        Console.Write("Customer ID: ");
        Console.WriteLine(entityInConflict.CustomerID);
        foreach (MemberChangeConflict mcc in occ.MemberConflicts)
        {
            object currVal = mcc.CurrentValue;
            object origVal = mcc.OriginalValue;
            object databaseVal = mcc.DatabaseValue;
            MemberInfo mi = mcc.Member;
            Console.WriteLine("Member: {0}", mi.Name);
            Console.WriteLine("current value: {0}", currVal);
            Console.WriteLine("original value: {0}", origVal);
            Console.WriteLine("database value: {0}", databaseVal);
        }
    }
}
You can create a partial class for your DataContext and use the OnCreated partial method to set the Log to Console.Out, wrapped in an #if DEBUG. This will help you see the queries executed while debugging any instance of the DataContext you are using.
I have found this useful while debugging LINQ to SQL exceptions:
partial void OnCreated()
{
#if DEBUG
    this.Log = Console.Out;
#endif
}
The error you are referring to above is usually caused by associations pointing in the wrong direction. This happens very easily when manually adding associations to the designer since the association arrows in the L2S designer point backwards when compared to data modelling tools.
It would be nice if they threw a more descriptive exception, and maybe they will in a future version. (Damien / Matt...?)
This is what I did
...
var builder = new StringBuilder();
try
{
    context.Log = new StringWriter(builder);
    context.MY_TABLE.InsertAllOnSubmit(someData);
    context.SubmitChanges();
}
finally
{
    Log.InfoFormat("Some meaningful message here... ={0}", builder);
}
A simple solution could be to run a trace on your database and inspect the queries run against it, filtered of course to exclude other applications accessing the database.
That of course only helps once you get past the exceptions...
VS 2008 has the ability to debug through the .NET Framework (http://blogs.msdn.com/sburke/archive/2008/01/16/configuring-visual-studio-to-debug-net-framework-source-code.aspx).
This is probably your best bet; you can see what's happening and what all the properties are at the exact point in time.
Why do you do UpdateJobMaster on a new instance? Shouldn't it be InsertOnSubmit?
JobMaster newJobToCreate = new JobMaster();
newJobToCreate.JobID = 9999;
newJobToCreate.ProjectID = "New Project";
this.InsertOnSubmit(newJobToCreate);
this.SubmitChanges();
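As a side note, with a stock LINQ to SQL DataContext the insert normally goes through the generated Table<T> property rather than the context itself; a sketch, with the table property name (JobMasters) assumed:
// Hypothetical sketch: insert via the generated Table<JobMaster> property.
var newJob = new JobMaster { JobID = 9999, ProjectID = "New Project" };
db.JobMasters.InsertOnSubmit(newJob); // db is the generated DataContext instance
db.SubmitChanges();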
This almost certainly won't be everyone's root cause, but I encountered this exact same exception in my project - and found that the root cause was that an exception was being thrown during construction of an entity class. Oddly, the true exception is "lost" and instead manifests as an ArgumentOutOfRange exception originating at the iterator of the Linq statement that retrieves the object/s.
If you are receiving this error and you have introduced OnCreated or OnLoaded methods on your POCOs, try stepping through those methods.
Hrm.
Taking a WAG (Wild Ass Guess), it looks to me like LINQ to SQL is trying to find an object with an id that doesn't exist, based somehow on the creation of the JobMaster class. Are there foreign keys related to that table such that LINQ to SQL would attempt to fetch an instance of a class which may not exist? You seem to be setting the ProjectID of the new object to a string: do you really have an id that's a string? If you're trying to set it to a new project, you'll need to create a new project and get its id.
Lastly, what does UpdateJobMaster do? Could it be doing something such that the above would apply?
We have actually stopped using the Linq to SQL designer for our large projects and this problem is one of the main reasons. We also change a lot of the default values for names, data types and relationships and every once in a while the designer would lose those changes. I never did find an exact reason, and I can't reliably reproduce it.
That, along with the other limitations caused us to drop the designer and design the classes by hand. After we got used to the patterns, it is actually easier than using the designer.
I posted a similar question earlier today here: Strange LINQ Exception (Index out of bounds).
It's a different use case - where this bug happens during a SubmitChanges(), mine happens during a simple query, but it is also an Index out of range error.
Cross posting in this question in case the combination of data in the questions helps a good Samaritan answer either :)
Check that all the "primary key" columns in your dbml actually relate to the primary keys on the database tables. I just had a situation where the designer decided to put an extra PK column in the dbml, which meant LINQ to SQL couldn't find both sides of a foreign key when saving.
I recently encountered the same issue: what I did was
Proce proces = unit.Proces.Single(u =>
    u.ProcesTypeId == (from pt in context.ProcesTypes
                       where pt.Name == "Fix-O"
                       select pt).Single().ProcesTypeId
    && u.UnitId == UnitId);
Instead of:
Proce proces = context.Proces.Single(u =>
    u.ProcesTypeId == (from pt in context.ProcesTypes
                       where pt.Name == "Fix-O"
                       select pt).Single().ProcesTypeId
    && u.UnitId == UnitId);
Here context was obviously the DataContext object and "unit" an instance of the Unit object, a data class from a dbml file.
Next, I used the "proces" object to set a property on an instance of another data class object. Probably the LINQ engine could not check whether the property I was setting from the "proces" object was allowed in the INSERT command that LINQ was going to have to create to add the other data class object to the database.
I had the same unhelpful error.
I had a foreign key relation to a column of a table that was not the primary key of the table, but a unique column.
When I changed the unique column to be the primary key of the table the problem went away.
Hope this helps anyone!
Posted my experiences with this exception in an answer to SO# 237415
I ended up on this question when trying to debug my LINQ ChangeConflictException. In the end I realized the problem was that I had manually added a property to a table in my DBML file, but I forgot to set properties like Nullable (which should have been true in my case) and Server Data Type.
Hope this helps someone.
This was a long time ago, but I had the same problem, and the error was because of a trigger with a select statement. Something like:
CREATE TRIGGER NAME ON TABLE1 AFTER UPDATE AS SELECT table1.key from table1
inner join inserted on table1.key = inserted.key
When LINQ to SQL runs the update command, it also runs a select statement in the same query to receive the auto-generated values, and it expects the first result set to contain the columns it asked for; but in this case the first result set contained the columns from the select statement in the trigger. So LINQ to SQL was expecting the auto-generated columns, but it only received one column (with the wrong data), and that was causing this exception.
