How can I reject all changes in a Linq to SQL's DataContext?

On Linq to SQL's DataContext I am able to call SubmitChanges() to submit all changes.
What I want is to somehow reject all changes in the DataContext and roll back all pending changes (preferably without going to the database).
Is this possible?

Why not discard the data context and simply replace it with a new instance?
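A minimal sketch of that approach; DBDataContext, the context field, and ReloadData() are illustrative names, not from the question:
// Hypothetical field holding the current context; discard and replace it.
private DBDataContext context = new DBDataContext();

public void CancelChanges()
{
    context.Dispose();             // throw away all pending changes
    context = new DBDataContext(); // fresh context, clean change tracker
    ReloadData();                  // hypothetical: re-query whatever the UI shows
}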

using System.Data.Linq;

public static class DataContextExtensions
{
    /// <summary>
    /// Discards all pending changes of the current DataContext.
    /// All un-submitted changes, including inserts/deletes/modifications, will be lost.
    /// </summary>
    /// <param name="context"></param>
    public static void DiscardPendingChanges(this DataContext context)
    {
        context.RefreshPendingChanges(RefreshMode.OverwriteCurrentValues);
        ChangeSet changeSet = context.GetChangeSet();
        if (changeSet != null)
        {
            // Undo inserts
            foreach (object objToInsert in changeSet.Inserts)
            {
                context.GetTable(objToInsert.GetType()).DeleteOnSubmit(objToInsert);
            }
            // Undo deletes
            foreach (object objToDelete in changeSet.Deletes)
            {
                context.GetTable(objToDelete.GetType()).InsertOnSubmit(objToDelete);
            }
        }
    }

    /// <summary>
    /// Refreshes all pending Delete/Update entity objects of the current DataContext
    /// according to the specified mode. Pending Insert entity objects are not affected.
    /// </summary>
    /// <param name="context"></param>
    /// <param name="refreshMode">A value that specifies how optimistic concurrency conflicts are handled.</param>
    public static void RefreshPendingChanges(this DataContext context, RefreshMode refreshMode)
    {
        ChangeSet changeSet = context.GetChangeSet();
        if (changeSet != null)
        {
            context.Refresh(refreshMode, changeSet.Deletes);
            context.Refresh(refreshMode, changeSet.Updates);
        }
    }
}
Refer to Linq to SQL - Discard Pending Changes

In .net 3.0, use db.GetChangeSet().Updates.Clear() for updated items, db.GetChangeSet().Inserts.Clear() for new items, or db.GetChangeSet().Deletes.Clear() for deleted items.
In .net 3.5 and above, the result of GetChangeSet() is read-only; loop over the collections with a for or foreach and refresh every ChangeSet table, as macias also wrote in his comment.
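For example, the 3.5+ refresh loop might look roughly like this (db being your DataContext):
// Sketch: the ChangeSet collections are read-only in .NET 3.5+,
// so iterate them and refresh each pending update from the database.
ChangeSet changeSet = db.GetChangeSet();
foreach (object updated in changeSet.Updates)
{
    db.Refresh(RefreshMode.OverwriteCurrentValues, updated);
}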

As Haacked said, just drop the data context.
You probably shouldn't keep the data context alive for a long time. Data contexts are designed to be used in a transactional manner (i.e. one data context per atomic unit of work). If you keep one alive for a long time, you run a greater risk of generating a concurrency exception when you update a stale entity.
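In other words, one short-lived context per unit of work, e.g. (a sketch assuming a generated DBDataContext with a Customers table):
using (var db = new DBDataContext())
{
    var customer = db.Customers.Single(c => c.Id == customerId);
    customer.Name = newName;
    db.SubmitChanges();
} // disposed here; no stale entities survive to cause concurrency conflicts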

Calling Clear() on the Updates, Deletes and Inserts collections does not work.
GetOriginalEntityState() can be useful, but it only gives the IDs for foreign-key relationships, not the actual entities, so you're left with a detached object.
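For illustration, a hedged sketch of the call (db and modifiedCustomer are assumed names):
// Returns a snapshot of the entity as it was when the context first tracked it;
// association properties on the snapshot are detached (FK IDs only).
Customer original = db.Customers.GetOriginalEntityState(modifiedCustomer);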
Here's an article that explains how to discard changes from the data context: http://graemehill.ca/discard-changes-in-linq-to-sql-datacontext
EDIT: Calling Refresh() will undo updates, but not deletes and inserts.

Refresh will work; however, you have to pass in the entities you want to reset.
For example
dataContext.Refresh(RefreshMode.OverwriteCurrentValues, someObject);

You can use GetOriginalEntityState(...) to get the original values of the objects, e.g. Customers, using the old cached values.
You can also iterate through the changes, e.g. the updates, and refresh only the specific objects rather than entire tables, because the performance penalty of refreshing whole tables would be high.
foreach (Customer c in MyDBContext.GetChangeSet().Updates)
{
    MyDBContext.Refresh(System.Data.Linq.RefreshMode.OverwriteCurrentValues, c);
}
This will revert the changes using the data persisted in the database.
Another solution is to discard the DataContext you use, by calling Dispose().
In any case, it is good practice to override the Insert and Remove methods of the collection (e.g. of Customers) you use and add e.g. an InsertOnSubmit() call there; a sketch of this follows below. This will resolve your issue with pending insertions and deletions.
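A hedged sketch of that idea using Collection&lt;T&gt;'s protected hooks; CustomerCollection and DBDataContext are illustrative names:
using System.Collections.ObjectModel;

public class CustomerCollection : Collection<Customer>
{
    private readonly DBDataContext db; // assumed generated context

    public CustomerCollection(DBDataContext db)
    {
        this.db = db;
    }

    protected override void InsertItem(int index, Customer item)
    {
        db.Customers.InsertOnSubmit(item); // context now tracks the pending insert
        base.InsertItem(index, item);
    }

    protected override void RemoveItem(int index)
    {
        db.Customers.DeleteOnSubmit(this[index]); // context now tracks the pending delete
        base.RemoveItem(index);
    }
}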

My application is Outlook-style, with an icon to select the active form (a ListBox). Before allowing the user to change context, they have to accept or discard their changes.
var changes = db.GetChangeSet();
if ((changes.Updates.Count > 0) || (changes.Inserts.Count > 0) || (changes.Deletes.Count > 0))
{
    if (MessageBox.Show("Would you like to save changes?", "Save Changes", MessageBoxButton.YesNo) == MessageBoxResult.Yes)
    {
        db.SubmitChanges();
    }
    else
    {
        // Roll back changes
        foreach (object objToInsert in changes.Inserts)
        {
            db.GetTable(objToInsert.GetType()).DeleteOnSubmit(objToInsert);
        }
        foreach (object objToDelete in changes.Deletes)
        {
            db.GetTable(objToDelete.GetType()).InsertOnSubmit(objToDelete);
        }
        foreach (object objToUpdate in changes.Updates)
        {
            db.Refresh(RefreshMode.OverwriteCurrentValues, objToUpdate);
        }
        CurrentForm.SetObject(null); // Application code to clear the active form
        RefreshList();               // Application code to refresh the active list
    }
}

Excellent write-up above, but here is a copy-and-paste of the code I used.
Public Sub DiscardInsertsAndDeletes(ByVal data As DataContext)
    ' Get the changes
    Dim changes = data.GetChangeSet()

    ' Delete the insertions
    For Each insertion In changes.Inserts
        data.GetTable(insertion.GetType).DeleteOnSubmit(insertion)
    Next

    ' Insert the deletions
    For Each deletion In changes.Deletes
        data.GetTable(deletion.GetType).InsertOnSubmit(deletion)
    Next
End Sub

Public Sub DiscardUpdates(ByVal data As DataContext)
    ' Get the changes
    Dim changes = data.GetChangeSet()

    ' Refresh the tables with updates
    Dim updatedTables As New List(Of ITable)
    For Each update In changes.Updates
        Dim tbl = data.GetTable(update.GetType)
        ' Make sure not to refresh the same table twice
        If updatedTables.Contains(tbl) Then
            Continue For
        Else
            updatedTables.Add(tbl)
            data.Refresh(RefreshMode.OverwriteCurrentValues, tbl)
        End If
    Next
End Sub

Here is how I did it. I just followed Teddy's example above and simplified it. One question, though: why even bother with the Refresh on the deletes?
public static bool UndoPendingChanges(this NtsSuiteDataContext dbContext)
{
    if (dbContext.ChangesPending())
    {
        ChangeSet dbChangeSet = dbContext.GetChangeSet();
        dbContext.Refresh(RefreshMode.OverwriteCurrentValues, dbChangeSet.Deletes);
        dbContext.Refresh(RefreshMode.OverwriteCurrentValues, dbChangeSet.Updates);

        // Undo inserts
        foreach (object objToInsert in dbChangeSet.Inserts)
        {
            dbContext.GetTable(objToInsert.GetType()).DeleteOnSubmit(objToInsert);
        }
        // Undo deletes
        foreach (object objToDelete in dbChangeSet.Deletes)
        {
            dbContext.GetTable(objToDelete.GetType()).InsertOnSubmit(objToDelete);
        }
    }
    return true;
}

This works for me, 13 years later, using .NET (Core) 6 and Entity Framework Core:
dbContext.ChangeTracker.Clear();

Related

Concerned about the size of my Aggregate Root [closed]

I am new to DDD and have a concern about the size of my Aggregate Root. The object graph is like the image below (they are collections). The problem is that all of the entities depend on the state of the Aggregate Root (Event). My question is: how do I break the aggregate into smaller aggregates? It's like I have a "God" aggregate root that just manages everything.
This is a very simplistic view of my domain, and these are the rules:
An event has a number of different states (implemented with the State design pattern here).
An event has a collection of sessions, but only 1 can be active at a time, and only if the event is in the correct state.
A session has two states: Active and Ended.
A session has a collection of Guests.
A session has a collection of photos (maximum of 10).
When a session is deleted, it should delete all its children.
When a session has ended and a photo is deleted, it should check whether any other photos belong to the session. If not, it should also delete the session.
When a session has ended and a photo is deleted, sometimes it should throw an exception, depending on the state of the event.
When a session is active and a photo is deleted, it should not worry about whether the session has any other photos.
When a session ends, it must have at least 1 photo and at least 1 guest.
A photo can be updated, but only if the event is in the right state.
When an event is deleted, it should delete all its children.
Edit: I have divided the one aggregate into smaller aggregates so that Event, Session and Photo are all ARs. The issue is that a session needs to perform a check on the Event AR before starting. Is it perfectly OK to inject an event object into the session's Start method, Session.Start(Event @event), or will I have concurrency issues as outlined in some of the comments?
As a first step, the following 3 articles will be invaluable: http://dddcommunity.org/library/vernon_2011/
With DDD you are splitting the entities up into boundaries where the state is valid after a single operation from an external source completes (i.e. a method call).
Think in terms of the business problem you are trying to solve - you have used the word delete a lot...
Does delete even have a place in the wording of the business experts for whom you are designing the system? Thinking in terms of the real world and not database infrastructure, unless you can create a time machine to travel back in time and stop an event from starting and therefore change history, the word delete has no real world analogy.
If you are forcing yourself to delete children on delete, that operation would need to become a transaction, so things that may not make sense inside the aggregate root get forced into it (so that the state of the entity and all its children can be controlled and assured to be valid once the method call completes). Yes, there are things you can do with a transaction across multiple aggregate roots, but these are very rare situations, to be avoided if possible.
Eventual consistency is used as an alternative to transactions and reduces complexity; if you speak to the person for whom the system is being designed, you will probably find that a delay of seconds or minutes is more than acceptable. This is plenty of time to fire off an event, to which some other business logic listens and takes the necessary action. Using eventual consistency removes the headaches that come with transactions.
Photos could take up a lot of storage, yes, so you would probably need a cleanup mechanism that runs after an event is marked as finished. I would probably fire off an event once the session is marked closed (see the sketch below); a different system somewhere else would listen for this event and after 1 year (or whatever makes sense for you) remove the photos from a server... assuming you used an array of string[10] for your URLs.
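A rough sketch of that event-driven cleanup; all names here are illustrative, not from the question:
// The aggregate raises this when a session is marked closed.
public class SessionEnded
{
    public SessionId SessionId { get; set; }
    public DateTime EndedAtUtc { get; set; }
}

// Some other system subscribes and reacts later - eventual consistency,
// with no shared transaction with the aggregate.
public class PhotoCleanupHandler
{
    public void Handle(SessionEnded e)
    {
        // e.g. schedule the session's photo URLs for removal after 1 year
    }
}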
If this is the maximum extent of your business logic, then don't only focus on DDD; it seems like this could be a good fit for Entity Framework, which is essentially CRUD and has cascade deletes built in.
Answers to the edit:
What is a photo - does it contain attributes? Or is it instead something like a URL to a photo, or a path to a picture file?
I'm not yet thinking of databases; that should be the very last thing thought of, and the solution should be database/technology agnostic. I see the rules as:
An event has many sessions.
A Session has the following states: NotStarted, Started and Ended.
A Session has a collection of Guests, I'm going to assume these are unique (in that two guests with the same name are not the same, so a guest should be an aggregate root).
An Event has one active Session.
When there are no active Sessions, an Event can be marked as Finished.
No Sessions can be started once an Event is marked as Finished.
A session has a collection of up to 10 photos.
When a session has ended, a photo cannot be removed.
A Session cannot start if there are no Guests. A Session cannot end if there are no Photos.
You cannot return the Session directly, as a user of your code may call Start() on the session; you will need some way of checking with the Event that it cannot be started, so you can chain up to the root - this is why I pass the event into the Session. If you don't like this approach, then just put the methods that manipulate the Session on the Event (so everything is accessed via the Event, which enforces all the rules).
In the simplest case, I see the photo as a string (value object) in the Session entity. As a first stab I would do something like this:
// untested sketch - tidied up so it should compile, but treat it as illustrative
public class Event
{
    List<Session> sessions = new List<Session>();
    bool isEventClosed = false;

    public SessionId NewSession(string description, string speaker)
    {
        if (isEventClosed)
            throw new InvalidOperationException("cannot add session to closed event");

        // create a new session; what will you use for identity - string, guid etc?
        var sessionId = new SessionId(); // in this case autogenerate a guid inside this class
        this.sessions.Add(new Session(this, sessionId, description, speaker));
        return sessionId;
    }

    public Session GetSession(SessionId id)
    {
        return this.sessions.FirstOrDefault(x => x.Id == id);
    }

    public bool CanStartSession(Session sessionToStart)
    {
        // TODO: check the session is actually in our list!
        if (this.isEventClosed)
            return false;

        foreach (var session in sessions)
        {
            if (session.IsStarted())
                return false;
        }
        return true;
    }
}

public class Session
{
    List<GuestId> guests = new List<GuestId>();   // list of guests
    List<string> photoUrls = new List<string>();  // strings with photo urls
    readonly SessionId id;
    DateTime? started = null;
    DateTime? ended = null;
    readonly Event parentEvent;

    public Session(Event parent, SessionId id, string description, string speaker)
    {
        this.id = id;
        this.parentEvent = parent;
        // store all the other params
    }

    public SessionId Id { get { return this.id; } }

    public void AddGuest(GuestId guestId)
    {
        this.guests.Add(guestId);
    }

    public void RemoveGuest(GuestId guestId)
    {
        if (this.IsEnded())
            throw new InvalidOperationException("cannot remove guest after event has ended");
        this.guests.Remove(guestId);
    }

    public void AddPhoto(string url)
    {
        if (this.photoUrls.Count >= 10)
            throw new InvalidOperationException("cannot add more than 10 photos");
        this.photoUrls.Add(url);
    }

    public void Start()
    {
        if (this.guests.Count == 0)
            throw new InvalidOperationException("cant start session without guests");
        if (CanBeStarted() == false)
            throw new InvalidOperationException("already started");
        if (this.parentEvent.CanStartSession(this) == false)
            throw new InvalidOperationException("another session at our event is already underway or the event is closed");
        this.started = DateTime.UtcNow;
    }

    public void End()
    {
        if (IsEnded())
            throw new InvalidOperationException("session already ended");
        if (this.photoUrls.Count == 0)
            throw new InvalidOperationException("cant end session without photos");
        this.ended = DateTime.UtcNow;
        // can raise an event here that the session has ended, see mediator/event-handler pattern
    }

    public bool CanBeStarted()
    {
        return (IsStarted() == false && IsEnded() == false);
    }

    public bool IsStarted()
    {
        return this.started != null;
    }

    public bool IsEnded()
    {
        return this.ended != null;
    }
}
No warranty on the above, and it may well need to change over time as the understanding evolves and as you see better ways to refactor the code.
A guest cannot be removed once a session has ended - this logic has been added with a simple test.
Regarding the deletion of guests leaving sessions with 0 guests - you have stated that guests cannot be removed once an event has ended; allowing that to happen at any point would violate that business rule, so it can't ever happen. Besides, using the term delete for a person in your problem space makes no sense, as people cannot be deleted; they existed and will always have a record that they existed. The database term delete belongs in the database, not in this domain model as you have described it.
Is this.parentEvent.CanStartSession()==false safe? No, it is not multithread-safe, but commands would be run independently, perhaps in parallel, each in their own thread:
void HandleStartSessionCommand(EventId eventId, SessionId sessionId)
{
    // repositories etc. have been provided in the constructor
    var @event = repository.GetById(eventId);
    var session = @event.GetSession(sessionId);
    session.Start();
    repository.Save(@event); // the Session is persisted via its aggregate root
}
If we were using event sourcing, then inside the repository the stream of changed events is written in a transaction, and the aggregate root's current version is used so we can detect concurrent changes. So in terms of event sourcing, a change to the Session would indeed be a change to its parent aggregate root, since it doesn't make sense to refer to a Session event in its own right (it will always be an Event event; it cannot exist independently). Obviously the code I have given in my example is not event sourced, but could be written so.
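To make the version check concrete, here is a hypothetical sketch of what an event-sourced repository's Save might do; every helper below is assumed infrastructure, not a real API:
public void Save(Event aggregate)
{
    using (var tx = store.BeginTransaction()) // assumed event-store API
    {
        // compare the version we loaded against what is currently stored
        long currentVersion = store.GetVersion(aggregate.Id); // assumed
        if (currentVersion != aggregate.LoadedVersion)
            throw new InvalidOperationException("concurrent modification detected");

        store.Append(aggregate.Id, aggregate.UncommittedEvents); // assumed
        tx.Commit();
    }
}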
If event sourcing is not used then depending on the transaction implementation, you could wrap the command handler in a transaction as a cross cutting concern:
public class TransactionalCommandHandlerDecorator<TCommand>
    : ICommandHandler<TCommand>
{
    private ICommandHandler<TCommand> decoratedHandler;

    public TransactionalCommandHandlerDecorator(
        ICommandHandler<TCommand> decoratedHandler)
    {
        this.decoratedHandler = decoratedHandler;
    }

    public void Handle(TCommand command)
    {
        using (var scope = new TransactionScope())
        {
            this.decoratedHandler.Handle(command);
            scope.Complete();
        }
    }
}
In short, we are using the infrastructure implementation to provide concurrency safety.

NHibernate IPreUpdateEventListener doesn't insert to second table

I'm prototyping some simple audit-logging functionality. I have a mid-sized entity model (~50 entities) and I'd like to implement audit logging on about 5 or 6 of them. Ultimately I'd like to get this working on inserts and deletes as well, but for now I'm just focusing on the updates.
The problem is, when I do session.Save (or SaveOrUpdate) to my AuditLog table from within the event listener, the original object is persisted (updated) correctly, but my AuditLog object never gets inserted.
I think it's a problem with both the pre and post event listeners being called too late in the NHibernate save life cycle for the session to still be usable.
// in my ISessionFactory Build method
nHibernateConfiguration.EventListeners.PreUpdateEventListeners =
    new IPreUpdateEventListener[] { new AuditLogListener() };

// in my AuditLogListener
public class AuditLogListener : IPreUpdateEventListener
{
    public bool OnPreUpdate(PreUpdateEvent @event)
    {
        string message = ...; // code to look at @event.Entity & build message - this works
        if (!string.IsNullOrEmpty(message))
            AuditLogHelper.Log(message, @event.Session); // Session is an IEventSource
        return false; // don't veto the change
    }
}
// in my helper
public static void Log(string message, IEventSource session)
{
    var user = session.QueryOver<User>()
        .Where(x => x.Name == "John")
        .SingleOrDefault();
    // have confirmed a valid user is found

    var logItem = new AdministrationAuditLog
    {
        LogDate = DateTime.Now,
        Message = message,
        User = user
    };

    (session as ISession).SaveOrUpdate(logItem);
}
When it hits session.SaveOrUpdate() in the last method, no errors occur and no exceptions are thrown. It seems to succeed and moves on, but nothing happens: the audit log entry never appears in the database.
The only way I've been able to get this to work is to create a completely new session and transaction inside this method, but this isn't really ideal: the code proceeds back out of the listener method and hits the session.Transaction.Commit() in my main app, and if that transaction fails, I've got an orphaned log message in my audit table for something that never happened.
Any pointers where I might be going wrong?
EDIT
I've also tried to SaveOrUpdate the logItem using a child session from the event, based on some comments in this thread: http://ayende.com/blog/3987/nhibernate-ipreupdateeventlistener-ipreinserteventlistener
var childSession = session.GetSession(EntityMode.Poco);
var logItem = new AdministrationAuditLog
{
    LogDate = DateTime.Now,
    Message = message,
    User = databaseLogin.User
};
childSession.SaveOrUpdate(logItem);
Still nothing appears in my Log table in the db. No errors or exceptions.
You need to create a child session, currentSession.GetSession(EntityMode.Poco), in your OnPreUpdate method and use it in your Log method. Depending on your FlushMode setting, you might need to flush the child session as well - for example:
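Roughly, building on the Log helper from the question (the Flush call is the part that is easy to miss):
public static void Log(string message, IEventSource session)
{
    // the child session shares the parent session's connection and transaction
    var childSession = session.GetSession(EntityMode.Poco);

    var logItem = new AdministrationAuditLog
    {
        LogDate = DateTime.Now,
        Message = message
    };

    childSession.SaveOrUpdate(logItem);
    childSession.Flush(); // may be required, depending on your FlushMode
}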
Also, is there any particular reason you want to roll your own solution? FYI, NHibernate Envers is now a pretty mature library.

EF Object entity State moves to Unchanged when attached to new Context

I have got the oddest problem: I get an object from EF and pass it back to the business logic for manipulation. When finished, I try to save the object back to the DB. When the object is passed into the following method, its EntityState is Modified, but as soon as the Attach line of code runs it is set to Unchanged, hence the save will not work.
Does anyone know why EF would do this?
public void Save(IEntity entity)
{
    using (var context = new eDocumentEntities())
    {
        using (var scope = new TransactionScope())
        {
            if (entity.Id != 0)
                context.AttachTo(entity.EntitySet, entity);
            else
                context.AddObject(entity.EntitySet, entity);

            context.SaveChanges();
            scope.Complete();
        }
    }
}
Since entity change tracking is encapsulated in the context, the entity naturally loses its state and other tracking information when it gets detached from the context.
OK, I found the solution to the problem here, but I'm more interested in an explanation: why would EF do this? It seems very counterproductive!?
http://geekswithblogs.net/michelotti/archive/2009/11/27/attaching-modified-entities-in-ef-4.aspx
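For reference, the gist of the linked fix (as I understand it) is to mark the entity Modified after attaching, since Attach alone marks it Unchanged; a sketch against the EF4 ObjectContext API:
if (entity.Id != 0)
{
    context.AttachTo(entity.EntitySet, entity);
    // Attach marks the entity Unchanged; explicitly flip it to Modified
    context.ObjectStateManager.ChangeObjectState(entity, System.Data.EntityState.Modified);
}
else
{
    context.AddObject(entity.EntitySet, entity);
}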

Should I trust LinqDataSource to clean up properly?

Taking my first stab at using the OnSelecting method of LinqDataSource so that I can specify a more complex query, I wrote this:
protected void CategoriesDataSource_OnSelecting(object sender, LinqDataSourceSelectEventArgs e)
{
    using (DataLayerDataContext db = new DataLayerDataContext())
    {
        e.Result = (from feed in db.Feeds
                    where feed.FeedName.StartsWith("Google")
                    select feed.MainCategory).Distinct();
    }
}
The problem, of course, is that the using clause will dispose the DataLayerDataContext. The 'solution' is to write it without the using, but then I'm afraid the context won't be disposed of in a timely fashion, that it will leave a bunch of connections open until garbage collection runs, and so on.
I'm no expert in this area, so any comments on whether this is a real problem, or am I worried for naught?
Ah, another person burned by the unfortunate side effects of deferred loading...
protected void CategoriesDataSource_OnSelecting(object sender, LinqDataSourceSelectEventArgs e)
{
    using (DataLayerDataContext db = new DataLayerDataContext())
    {
        e.Result = (from feed in db.Feeds
                    where feed.FeedName.StartsWith("Google")
                    select feed.MainCategory).Distinct().ToList();
        //                                              ^^^^^^^^^
    }
}
This will force the DataLayerDataContext to immediately run the query and create an in-memory list that's not dependent on the context or connection. That way you get your results immediately and you can dispose the context whenever you like.
The only problem (per Steven's comment) is the lazy-loaded navigation properties: even if you force the query to evaluate, if you try to access properties that are references to other LINQ objects (or lists of LINQ objects), they will fail to load unless you specify which properties are immediately required in the DataLoadOptions of the DataContext. See below for an example:
protected void CategoriesDataSource_OnSelecting(object sender, LinqDataSourceSelectEventArgs e)
{
    using (DataLayerDataContext db = new DataLayerDataContext())
    {
        DataLoadOptions loadOptions = new DataLoadOptions();
        loadOptions.LoadWith<Category>(c => c.SomeReference);
        loadOptions.LoadWith<Category>(c => c.SomeOtherReferences);
        db.LoadOptions = loadOptions;

        e.Result = (from feed in db.Feeds
                    where feed.FeedName.StartsWith("Google")
                    select feed.MainCategory).Distinct().ToList();
    }
}
Once you manually specify these associations, LINQ to SQL will eagerly load them into memory when the query executes, instead of leaving them to deferred loading (which would cause an exception when the properties are used after the DataContext is disposed).

In LINQ-SQL, wrap the DataContext in a using statement - pros and cons

Can someone pitch in their opinion about the pros/cons of wrapping the DataContext in a using statement (or not) in LINQ-SQL, in terms of factors such as performance, memory usage, ease of coding, the right thing to do, etc.?
Update: In one particular application, I experienced that, without wrapping the DataContext in a using block, the amount of memory used kept on increasing, as the live objects were not released for GC. As in the example below, if I hold the reference to the List of q objects and access entities of q, I create an object graph that is not released for GC.
DataContext with using:
using (DBDataContext db = new DBDataContext())
{
    var q =
        from x in db.Tables
        where x.Id == someId
        select x;
    return q.ToList();
}
DataContext without using and kept alive:
DBDataContext db = new DBDataContext();
var q =
    from x in db.Tables
    where x.Id == someId
    select x;
return q.ToList();
Thanks.
A DataContext can be expensive to create, relative to other things. However, if you're done with it and want connections closed ASAP, this will do that, releasing any cached results from the context as well. Remember, you're creating it no matter what; in this case you're just letting the garbage collector know there's more free stuff to get rid of.
A DataContext is made to be a short-use object: use it, get the unit of work done, get out... that's precisely what you're doing with a using.
So the advantages:
Quicker closed connections
Free memory from the dispose (cached objects in the context)
Downside - more code? But that shouldn't be a deterrent; you're using using properly here.
Look here at the Microsoft answer: http://social.msdn.microsoft.com/Forums/en-US/adodotnetentityframework/thread/2625b105-2cff-45ad-ba29-abdd763f74fe
Short version of whether you need to use using/.Dispose():
The short answer; no, you don't have to, but you should...
Well, it's an IDisposable, so I guess it's not a bad idea. The folks at MSFT have said that they made DataContexts as lightweight as possible so that you may create them with reckless abandon, so you're probably not gaining much, though...
The first time, the DataContext will get the object from the DB.
The next time you fire a query to get the same object (same parameters), you'll see the query in a profiler, but your object in the DataContext will not be replaced with the new one from the DB!
Not to mention that behind every DataContext is an identity map of all the objects you have asked for from the DB (you don't want to keep this around).
The entire idea of the DataContext is a Unit of Work with optimistic concurrency.
Use it for a short transaction (one submit only) and dispose of it.
The best way to not forget to dispose is using().
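A small sketch of the caching behaviour described above (DBDataContext and Customers are assumed names):
using (var db = new DBDataContext())
{
    var first = db.Customers.Single(c => c.Id == 42);
    first.Name = "changed locally";

    // The query is sent to the DB again (visible in a profiler)...
    var second = db.Customers.Single(c => c.Id == 42);

    // ...but the identity map hands back the already-tracked instance:
    bool sameObject = ReferenceEquals(first, second); // true, local change intact
}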
It depends on the complexity of your data layer. If every call is a simple single query, then each call can be wrapped in a using like in your question, and that would be fine.
If, on the other hand, your data layer can expect multiple sequential calls from the business layer, then you'd wind up repeatedly creating/disposing the DataContext for each larger sequence of calls. Not ideal.
What I've done is to create my data layer object as IDisposable. When it's created, the DataContext is created (or really, once the first call to a method is made), and when the data layer object is disposed, it closes and disposes the DataContext.
Here's what it looks like:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Configuration;

namespace PersonnelDL
{
    public class PersonnelData : IDisposable
    {
        #region DataContext management
        /// <summary>
        /// Create common datacontext for all data routines to the DB
        /// </summary>
        private PersonnelDBDataContext _data = null;
        private PersonnelDBDataContext Data
        {
            get
            {
                if (_data == null)
                {
                    _data = new PersonnelDBDataContext(ConfigurationManager.ConnectionStrings["PersonnelDB"].ToString());
                    _data.DeferredLoadingEnabled = false; // no lazy loading
                    //var dlo = new DataLoadOptions(); // dataload options go here
                }
                return _data;
            }
        }

        /// <summary>
        /// Close out the data context
        /// </summary>
        public void Dispose()
        {
            if (_data != null)
                _data.Dispose();
        }
        #endregion

        #region DL methods
        public Person GetPersonByID(string userid)
        {
            return Data.Persons.FirstOrDefault(p => p.UserID.ToUpper().Equals(userid.ToUpper()));
        }

        public List<Person> GetPersonsByIDlist(List<string> useridlist)
        {
            var ulist = useridlist.Select(u => u.ToUpper().Trim()).ToList();
            return Data.Persons.Where(p => ulist.Contains(p.UserID.ToUpper())).ToList();
        }

        // more methods...
        #endregion
    }
}
In one particular application, I experienced that, without wrapping the DataContext in a using block, the amount of memory used kept on increasing, as the live objects were not released for GC. As in the example below, if I hold the reference to the List<Table> object and access entities of q, I create an object graph that is not released for GC.
DBDataContext db = new DBDataContext();
var qs =
    (from x in db.Tables
     where x.Id == someId
     select x).ToList();

foreach (var q in qs)
{
    process(q);
    // cannot dispose the DataContext here, as the next iteration
    // would throw a "DataContext already disposed" exception while
    // accessing the entities of q in the process() function
    //db.Dispose();
}

void process(Table q)
{
    // accesses entities of q, which use deferred execution;
    // if the DataContext is already disposed, a "DataContext
    // already disposed" exception is thrown
}
Given this example, I cannot dispose the DataContext, because all the Table instances in the list variable qs share the same DataContext. After Dispose(), accessing the entities in process(Table q) throws a "DataContext already disposed" exception.
The ugly kludge, for me, was to remove all the entity references from the q objects after the foreach loop. The better way is, of course, to use the using statement.
As far as my experience goes, I would say: use the using statement.
