Getting DataContext error while saving form - linq

I get this error when opening one specific form. The rest of the application works fine, and I have no clue why this one doesn't.
Error: An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported.
I get the error at _oDBConnection when I try to save. When I watch _oDBConnection while stepping through the code, it does not exist. Even when I open the main window it does not exist, so this form is where the DataContext is built for the very first time.
Every class inherits from clsBase where the DataContext is built.
My colleague is the professional one who built it all; I am just expanding and using it (learned it by doing it). But now I'm stuck and he is on holiday, so keep it simple :-)
What can it be?
clsPermanency
namespace Reservation
{
    class clsPermanency : clsBase
    {
        private tblPermanency _oPermanency;

        public tblPermanency PermanencyData
        {
            get { return _oPermanency; }
            set { _oPermanency = value; }
        }

        public clsPermanency()
            : base()
        {
            _oPermanency = new tblPermanency();
        }

        public clsPermanency(int iID)
            : this()
        {
            _oPermanency = (from oPermanencyData in _oDBConnection.tblPermanencies
                            where oPermanencyData.ID == iID
                            select oPermanencyData).First();

            if (_oPermanency == null)
                throw new Exception("Permanentie niet gevonden");
        }

        public void save()
        {
            if (_oPermanency.ID == 0)
            {
                _oDBConnection.tblPermanencies.InsertOnSubmit(_oPermanency);
            }

            _oDBConnection.SubmitChanges();
        }
    }
}
clsBase
public class clsBase
{
    protected DBReservationDataContext _oDBConnection;
    protected int _iID;

    public int ID
    {
        get { return _iID; }
    }

    public DBReservationDataContext DBConnection
    {
        get { return _oDBConnection; }
    }

    public clsBase()
    {
        _oDBConnection = new DBReservationDataContext();
    }
}

Not a direct answer, but this is really bad design, sorry.
Issues:
One context instance per class instance. Pretty incredible. How are you going to manage units of work and transactions? And what about memory consumption and performance?
Indirection: every entity instance (prefixed o) is wrapped in a cls class. What a hassle to make classes cooperate, if necessary, or to access their properties.
DRY: far from it. Does each clsBase derivative have the same methods as clsPermanency?
Constructors: you always have to call the base constructor. The constructor with int iID always causes a redundant new object to be created, which will certainly be a noticeable performance hit when dealing with larger numbers. A minor change in constructor logic may cause the sequence of constructor invocations to change. (Nested and inherited constructors are always tricky).
Exception handling: you need a try-catch everywhere where classes are created. (BTW: First() will throw its own exception if the record is not there).
Finally, not a real issue, but class and variable name prefixes are sooo 19xx.
What to do?
I don't think you can change your colleague's design in his absence. But I'd really talk to him about it in due time. Just study some linq-to-sql examples out there to pick up some regular patterns.
The exception indicates that somewhere between fetching the _oPermanency instance (in the constructor that takes an ID) and saving it, a new _oDBConnection is created. The code as shown does not reveal how this could happen, but I assume there is more code than this. When you debug and check GetHashCode() of the _oDBConnection instances, you should be able to find where it happens.
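To make that concrete, here is a minimal sketch of the pattern that produces this exact message, and of the usual cure. The variable names and the ID value are made up for illustration; only DBReservationDataContext and tblPermanency come from your code.

// Hypothetical sketch: the entity is loaded by one DataContext and then handed
// to a different one, which is exactly what the error message complains about.
var loadContext = new DBReservationDataContext();
tblPermanency permanency = loadContext.tblPermanencies.First(p => p.ID == 1);

var saveContext = new DBReservationDataContext();   // a second, unrelated context
saveContext.tblPermanencies.Attach(permanency);     // throws "Attach or Add an entity that is not new..."
saveContext.SubmitChanges();

// The cure is to load and save through one and the same context, for example by
// reusing the clsBase DBConnection that loaded the entity instead of letting
// another object construct its own DataContext.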

Related

ArrayListModel will not sync with JList

I have combed through SO and found many questions on the topic of my problem, but none of them answer it.
I am setting up an MVC design. I have set things up correctly to the best of my knowledge, but I cannot get changes made through the controller to show in my view. I am working on an assignment that is essentially a program for a video rental store.
First, in a class called RentalStoreGUI, I set up my panels, and everything looks good when I run it.
RentalStoreEngine model = new RentalStoreEngine();
JList<DVD> list = new JList<DVD>();
list.setModel(model);
list.setSelectionMode(ListSelectionModel.SINGLE_SELECTION);
list.setVisible(true);
list.setSelectedIndex(0);
jScrollPane = new JScrollPane(list);
add(jScrollPane, BorderLayout.CENTER);
add(buttonPanel, BorderLayout.SOUTH);
As you can see, I set the list's model to an instance of another class called RentalStoreEngine, which extends AbstractListModel. The list model works when I do class-specific testing, and all of the necessary methods are implemented. Here is an example of my add method from that class:
public void add(DVD d) {
    if (d != null) {
        rentals.add(d); // rentals is an ArrayList<DVD> instantiated earlier
        fireIntervalAdded(this, rentals.size() - 1, rentals.size() - 1);
    }
}
Here is the actionPerformed method; it opens DVD_Dialog, which simply gets some input from the user and creates a new DVD object from it.
public void actionPerformed(ActionEvent event) {
    if (event.getSource() == rentDVD) {
        DVD_Dialog = new RentDVDDialog(this, null);
        DVD_Dialog.clear();
        DVD_Dialog.setVisible(true);
        dvd = new DVD(DVD_Dialog.getTitleText(), DVD_Dialog.getRenterText(),
                DVD_Dialog.getRentedOnText(), DVD_Dialog.getDueBackText());
        if (DVD_Dialog.closeStatus() == true) {
            model.add(dvd);
        }
    }
}
Eclipse gives me no errors until I run it. I then receive a NullPointerException at the line model.add(dvd);. Based on all my research, the list.setModel(model) call and the fireIntervalAdded method should update the JList on their own, but they do not. And as I said, class-specific testing of both the GUI and the model produces the desired results, but when it comes to integrating them I am at a loss.

Hibernate flush optimization using `hibernate.ejb.use_class_enhancer`

I am trying to use the Hibernate feature that enhances flush performance without requiring code changes. I came across the option hibernate.ejb.use_class_enhancer.
I made the following changes.
1) Set the property hibernate.ejb.use_class_enhancer to true.
Build failed with error 'Cannot apply class transformer without LoadTimeWeaver specified'
2) Added <context:load-time-weaver/> to the context files.
Build failed with the following error :
Specify a custom LoadTimeWeaver or start your Java virtual machine with Spring’s agent: -javaagent:spring-agent.jar
3) Added the following javaagent argument to the maven-surefire-plugin configuration:
-javaagent:${settings.localRepository}/org/springframework/spring-agent/2.5.6.SEC03/spring-agent-2.5.6.SEC03.jar
The build is successful now.
We have an interceptor that tracks the number of entities being flushed in a transaction.
After I made the above changes, I expected that number to come down significantly, but it did not.
My question is:
Are the above changes correct/enough for getting the 'entity flush optimization'?
How can I verify that the application is indeed using the optimization?
Edit:
After debugging, I found the following.
There is a point at which our DO class is submitted for transformation, but the logic that decides whether a given class should be transformed does not handle the class names correctly (in my case), so the DO class goes untransformed.
Is there a way I can plug in my own logic instead?
The relevant code is below.
The return copyEntities.contains( className ); comes out false for the following inputs: copyEntities contains the strings "com.x.y.abcDO" and "com.x.y.asxDO", whereas className is "com.x.y.abcDO_$$_jvsteb8_48".
public InterceptFieldClassFileTransformer(List<String> entities) {
    final List<String> copyEntities = new ArrayList<String>( entities.size() );
    copyEntities.addAll( entities );
    classTransformer = Environment.getBytecodeProvider().getTransformer(
            //TODO change it to a static class to make it faster?
            new ClassFilter() {
                public boolean shouldInstrumentClass(String className) {
                    return copyEntities.contains( className );
                }
            },
            //TODO change it to a static class to make it faster?
            new FieldFilter() {
                @Override
                public boolean shouldInstrumentField(String className, String fieldName) {
                    return true;
                }
                @Override
                public boolean shouldTransformFieldAccess(
                        String transformingClassName, String fieldOwnerClassName, String fieldName
                ) {
                    return true;
                }
            }
    );
}
Edited on June 15th:
I updated my project to use Spring 4.0.5.RELEASE and Hibernate 4.3.5.Final.
I started using org.hibernate.jpa.HibernatePersistenceProvider
and
org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver
and
hibernate.ejb.use_class_enhancer=true
With these changes, I am debugging the flush behavior. I have a question about this code block.
private boolean isUnequivocallyNonDirty(Object entity) {
    if (entity instanceof SelfDirtinessTracker)
        return ((SelfDirtinessTracker) entity).$$_hibernate_hasDirtyAttributes();

    final CustomEntityDirtinessStrategy customEntityDirtinessStrategy =
            persistenceContext.getSession().getFactory().getCustomEntityDirtinessStrategy();
    if ( customEntityDirtinessStrategy.canDirtyCheck( entity, getPersister(), (Session) persistenceContext.getSession() ) ) {
        return ! customEntityDirtinessStrategy.isDirty( entity, getPersister(), (Session) persistenceContext.getSession() );
    }

    if ( getPersister().hasMutableProperties() ) {
        return false;
    }

    if ( getPersister().getInstrumentationMetadata().isInstrumented() ) {
        // the entity must be instrumented (otherwise we can't check the dirty flag) and the dirty flag is false
        return ! getPersister().getInstrumentationMetadata().extractInterceptor( entity ).isDirty();
    }

    return false;
}
In my case, the flow returns false because the persister says yes to hasMutableProperties; I think the interceptor never got a chance to answer at all.
Shouldn't the bytecode transformer install an interceptor here? Or should the bytecode transformation make the entity a SelfDirtinessTracker?
Can anyone explain what behavior I should expect from the bytecode transformation here?

Entity Framework 4.3.1 add-migration error: "model backing the context has changed"

I'm getting an error when trying to run the EF 4.3.1 add-migration command:
"The model backing the ... context has changed since the database was created".
Here's one sequence that gets the error (although I've tried probably a dozen variants which also all fail)...
1) Start with a database that was created by EF Code First (ie, already contains a _MigrationHistory table with only the InitialCreate row).
2) The app's code data model and database are in-sync at this point (the database was created by CF when the app was started).
3) Because I have four DbContexts in my "Services" project, I didn't run the 'enable-migrations' command (it doesn't handle multiple contexts). Instead, I manually created the Migrations folder in the Services project and the Configuration.cs file (included at the end of this post). [I think I read this in a post somewhere.]
4) With the database not yet changed, and the app stopped, I use the VS EDM editor to make a trivial change to my data model (add one property to an existing entity), and have it generate the new classes (but not modify the database, obviously). I then rebuild the solution and all looks OK (but don't delete the database or restart the app, of course).
5) I run the following PMC command (where "App" is the name of one of the classes in Configuration.cs):
PM> add-migration App_AddTrivial -conf App -project Services -startup Services -verbose
... which fails with the "The model ... has changed. Consider using Code First Migrations..." error.
What am I doing wrong? And does anyone else see the irony in the tool telling me to use what I'm already trying to use ;-)
What are the correct steps for setting-up a solution starting with a database that was created by EF CF? I've seen posts saying to run an initial migration with -ignorechanges, but I've tried that and it doesn't help. Actually, I've spent all DAY testing various permutations, and nothing works!
I must be doing something really stupid, but I don't know what!
Thanks,
DadCat
Configuration.cs:
namespace mynamespace
{
    internal sealed class App : DbMigrationsConfiguration<Services.App.Repository.ModelContainer>
    {
        public App()
        {
            AutomaticMigrationsEnabled = false;
            MigrationsNamespace = "Services.App.Repository.Migrations";
        }

        protected override void Seed(Services.App.Repository.ModelContainer context)
        {
        }
    }

    internal sealed class Catalog : DbMigrationsConfiguration<Services.Catalog.Repository.ModelContainer>
    {
        public Catalog()
        {
            AutomaticMigrationsEnabled = false;
            MigrationsNamespace = "Services.Catalog.Repository.Migrations";
        }

        protected override void Seed(Services.Catalog.Repository.ModelContainer context)
        {
        }
    }

    internal sealed class Portfolio : DbMigrationsConfiguration<Services.PortfolioManagement.Repository.ModelContainer>
    {
        public Portfolio()
        {
            AutomaticMigrationsEnabled = false;
            MigrationsNamespace = "Services.PortfolioManagement.Repository.Migrations";
        }

        protected override void Seed(Services.PortfolioManagement.Repository.ModelContainer context)
        {
        }
    }

    internal sealed class Scheduler : DbMigrationsConfiguration<Services.Scheduler.Repository.ModelContainer>
    {
        public Scheduler()
        {
            AutomaticMigrationsEnabled = false;
            MigrationsNamespace = "Services.Scheduler.Repository.Migrations";
        }

        protected override void Seed(Services.Scheduler.Repository.ModelContainer context)
        {
        }
    }
}
When using EF Migrations you should have one data context per database. I know that it can grow really large, but by trying to split it you will run into several problems. One is the migration issue you are experiencing. Later on you will probably face problems when trying to write queries that join tables from the different contexts. Don't go that way; it's against how EF is designed.
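As a rough illustration of that single-context layout, here is a minimal sketch; all class and property names in it are invented for illustration and are not taken from the question.

using System.Data.Entity;
using System.Data.Entity.Migrations;

// Hedged sketch: one DbContext covers every table in the database, so a single
// Migrations folder and a single DbMigrationsConfiguration are enough, and
// add-migration no longer needs a -conf switch to choose between configurations.
public class CatalogEntry { public int Id { get; set; } }
public class ScheduledTask { public int Id { get; set; } }

public class ServicesContext : DbContext
{
    public DbSet<CatalogEntry> CatalogEntries { get; set; }
    public DbSet<ScheduledTask> ScheduledTasks { get; set; }
}

internal sealed class ServicesConfiguration : DbMigrationsConfiguration<ServicesContext>
{
    public ServicesConfiguration()
    {
        AutomaticMigrationsEnabled = false;
    }
}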

Why is FxCop raising the error "Types that own disposable fields should be disposable" on a class with no disposable fields?

I have a LINQ object with an additional method added to it. The class has no disposable properties or methods, but FxCop is raising the error "Types that own disposable fields should be disposable" and referencing that class.
I've reduced the code this far and still receive the error:
partial class WikiPage
{
    public PagePermissionSet GetUserPermissions(Guid? userId) {
        using (WikiTomeDataContext context = new WikiTomeDataContext()) {
            var permissions =
                from wiki in context.Wikis
                from pageTag in context.VirtualWikiPageTags
                select new {};

            return null;
        }
    }
}
However, if I remove EITHER of the from clauses, FxCop stops giving the error:
partial class WikiPage
{
    public PagePermissionSet GetUserPermissions(Guid? userId) {
        using (WikiTomeDataContext context = new WikiTomeDataContext()) {
            var permissions =
                from pageTag in context.VirtualWikiPageTags
                select new {};

            return null;
        }
    }
}
Or
partial class WikiPage
{
    public PagePermissionSet GetUserPermissions(Guid? userId) {
        using (WikiTomeDataContext context = new WikiTomeDataContext()) {
            var permissions =
                from wiki in context.Wikis
                select new {};

            return null;
        }
    }
}
PagePermissionSet is not disposable.
Is this a false positive? Or is the LINQ code somehow generating a disposable field on the class? If it isn't a false positive, FxCop is recommending that I implement the IDisposable interface, but what would I do in the Dispose method?
EDIT:
The full FxCop error is:
"Implement IDisposable on 'WikiPage' because it
creates members of the following IDisposable types:
'WikiTomeDataContext'. If 'WikiPage' has previously
shipped, adding new members that implement IDisposable
to this type is considered a breaking change to existing
consumers."
Edit 2:
This is the disassembled code that raises the error:
public PagePermissionSet GetUserPermissions(Guid? userId)
{
using (WikiTomeDataContext context = new WikiTomeDataContext())
{
ParameterExpression CS$0$0001;
ParameterExpression CS$0$0003;
var permissions = context.Wikis.SelectMany(Expression.Lambda<Func<Wiki, IEnumerable<VirtualWikiPageTag>>>(Expression.Property(Expression.Constant(context), (MethodInfo) methodof(WikiTomeDataContext.get_VirtualWikiPageTags)), new ParameterExpression[] { CS$0$0001 = Expression.Parameter(typeof(Wiki), "wiki") }), Expression.Lambda(Expression.New((ConstructorInfo) methodof(<>f__AnonymousType8..ctor), new Expression[0], new MethodInfo[0]), new ParameterExpression[] { CS$0$0001 = Expression.Parameter(typeof(Wiki), "wiki"), CS$0$0003 = Expression.Parameter(typeof(VirtualWikiPageTag), "pageTag") }));
return null;
}
}
Edit 3:
There does appear to be a closure class containing a reference to the DataContext. Here is its disassembled code:
[CompilerGenerated]
private sealed class <>c__DisplayClass1
{
// Fields
public WikiTomeDataContext context;
// Methods
public <>c__DisplayClass1();
}
My guess is that the two from clauses generate a call to SelectMany with a closure over your data context. The closure instance has a field referencing the data context, which is what causes the FxCop warning. This is nothing to worry about.
There's only one instance of your data context, which you clean up via the using block. Because the closure doesn't have a finalizer, there's no performance or safety implication behind the FxCop warning here.
I noticed that this is a partial class. Have you checked the other implementation file for the class and see if it has an IDisposable member that is not being disposed?
I don't think the generated closure is at fault here. Closures are generated with certain attributes that should cause FxCop to ignore warnings like this.
EDIT
Further investigation by the OP showed this to be an issue with an IDisposable field being lifted into a closure.
Unfortunately there isn't a whole lot you can do about this. There is no way to make the closure implement IDisposable, and even if you could, there would be no way to call Dispose on the closure instance.
The best way to approach this problem is to rewrite your code in such a way that a disposable value does not get captured in the closure. Disposable values should always be disposed when you are finished with them, and capturing one in a closure prevents you from doing this.
If you're returning a LINQ query from your method, consumers will iterate over the results using foreach.
When a consumer finishes a foreach loop, it internally calls dispose on the IEnumerable source (in this case, your LINQ query). This will dispose the WikiTomeDataContext.
However, if a consumer made a call to method returning a LINQ query but never iterated over the results, it would appear that enumerable would never be disposed (that is, until the garbage collector cleaned up the object). This would lead to your WikiTomeDataContext not being disposed until garbage collection.
One way you might be able to get around this problem is by calling .ToArray() on the result of your LINQ query, disposing your context, and then returning the array.
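A rough sketch of that suggestion follows, assuming the WikiTomeDataContext and VirtualWikiPageTag types from the question (the method name is invented):

// Hypothetical sketch: materialize the query with ToArray() inside the using
// block so the context is disposed before the method returns, instead of
// leaving disposal to whenever (or whether) the caller finishes iterating.
public VirtualWikiPageTag[] GetPageTags()
{
    using (var context = new WikiTomeDataContext())
    {
        return context.VirtualWikiPageTags.ToArray();   // query executes here
    }                                                    // context disposed here
}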
Your code that gives the error uses WikiDataContext.
Your two examples that do not give an error use WikiTomeDataContext.
Maybe there is some difference between these two that is causing the error.

Enterprise Library Validation Block - Should validation be placed on class or interface?

I am not sure where the best place to put validation (using the Enterprise Library Validation Block) is. Should it be on the class or on the interface?
Things that may affect it:
Validation rules would not be changed in classes which inherit from the interface.
Validation rules would not be changed in classes which inherit from the class.
Inheritance will occur from the class in most cases - I suspect a few fringe cases will inherit from the interface (but I would try to avoid that).
The interface's main use is for DI, which will be done with the Unity block.
Given the way you are trying to use the Validation Block with DI, I don't think it's a problem if you set the attributes at the interface level. I also don't think it should create problems in the inheritance chain. However, I have mostly seen this block used at the class level, with the intent of keeping interfaces from over-specifying things. IMO I don't see a big threat in doing this.
Be very careful here; your test is too simple.
This will not work as you expect for self-validation validators or class validators, only for simple property validators like the one you have there.
Also, if you are using the PropertyProxyValidator in an ASP.NET page, I don't believe it will work either, because it only looks at field validators, not inherited/implemented validators...
Yes, big holes in the VAB if you ask me.
For the sake of completeness I decided to write a small test to make sure it would work as expected, and it does. I'm just posting it here in case anyone else wants it in the future.
using System;
using Microsoft.Practices.EnterpriseLibrary.Validation;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            ISpike spike = new Spike();
            spike.Name = "A really long name that will fail.";

            ValidationResults r = Validation.Validate<ISpike>(spike);
            if (!r.IsValid)
            {
                throw new InvalidOperationException("Validation error found.");
            }
        }
    }

    public class Spike : ConsoleApplication1.ISpike
    {
        public string Name { get; set; }
    }

    interface ISpike
    {
        [StringLengthValidator(2, 5)]
        string Name { get; set; }
    }
}
What version of Enterprise Library are you using for your code example? I tried it using Enterprise Library 5.0, but it didn't work.
I tracked it down to the following section of code within the EL 5.0 source code:
// namespace Microsoft.Practices.EnterpriseLibrary.Validation
// public static class Validation
public static ValidationResults Validate<T>(T target, ValidationSpecificationSource source)
{
    Type targetType = target != null ? target.GetType() : typeof(T);
    Validator validator = ValidationFactory.CreateValidator(targetType, source);
    return validator.Validate(target);
}
If the target object is defined, then target.GetType() will return the most specific class definition, NOT the interface definition.
My workaround is to replace your line:
ValidationResults r = Validation.Validate<ISpike>(spike);
With:
ValidationResults r = ValidationFactory.CreateValidator<ISpike>().Validate(spike);
This got it working for me.
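For completeness, here is the workaround dropped into the earlier Spike test, as a sketch (same types as in the test above; Enterprise Library 5.0 assumed):

// Hedged sketch: asking ValidationFactory for a validator keyed on the interface
// makes the [StringLengthValidator] on ISpike.Name apply even though the
// runtime type is Spike.
ISpike spike = new Spike();
spike.Name = "A really long name that will fail.";

ValidationResults r = ValidationFactory.CreateValidator<ISpike>().Validate(spike);
if (!r.IsValid)
{
    throw new InvalidOperationException("Validation error found.");
}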
