I have a Doctrine Criteria that appears to query with mixed-up parameters unless the association is eagerly loaded first.
The setup is quite simple. I have four entities: User, Business, UserBusiness, and UserBusinessMeta.
Each User can belong to one or more Businesses, and each UserBusiness can also have a list of meta key/values for the given business. So, for example:
User:
id: 3
Business:
id: 2
UserBusiness:
user_id: 3
business_id: 2
UserBusinessMeta (in the db there is also a unique constraint on user_id, business_id, key):
user_id: 3
business_id: 2
key: 'foo'
value: 'bar'
In my UserBusiness entity I have a method called getFoo, which uses a Criteria as shown below:
$criteria = Criteria::create()
->where(Criteria::expr()->eq('key', 'foo'))
->setMaxResults(1);
return $this->meta->matching($criteria)->first();
For some reason, when the SQL is created for this criteria, it reverses the business_id and user_id values when setting params so that it's looking for a business_id of 3 and a user_id of 2!
SELECT
t0.user_id AS user_id_1,
t0.business_id AS business_id_2,
t0.`key` AS key_3,
t0.`value` AS value_4,
t0.id AS id_5,
t0.user_id AS user_id_6,
t0.business_id AS business_id_7
FROM user_business_meta t0
WHERE (t0.`key` = ? AND t0.user_id = ? AND t0.business_id = ?) LIMIT 1
array (size=3)
0 => string 'foo' (length=3)
1 => int 2
2 => int 3
array (size=3)
0 => string 'string' (length=6)
1 => string 'integer' (length=7)
2 => string 'integer' (length=7)
However, if I set the UserBusinessMeta association's fetch mode to EAGER, then the values are correctly loaded into memory and the above criteria -- no longer needing to generate SQL -- returns the right result.
I use XML mappings and there's really nothing special going on in them. The names of the columns and fields are correct, as are the associations:
Business -> one-to-many UserBusiness
UserBusiness -> one-to-many UserBusinessMeta
The mapping for the association here is:
<one-to-many target-entity="UserBusinessMeta" mapped-by="UserBusiness" field="meta" orphan-removal="true">
<cascade>
<cascade-persist/>
<cascade-remove/>
</cascade>
</one-to-many>
And the user_id and business_id columns are mapped:
<id name="businessId" column="business_id" type="integer" />
<id name="userId" column="user_id" type="integer" />
In UserBusinessMeta the many-to-one association back to UserBusiness is defined as follows:
<many-to-one target-entity="UserBusiness" field="UserBusiness" inversed-by="meta">
<join-columns>
<join-column name="user_id" referenced-column-name="user_id" />
<join-column name="business_id" referenced-column-name="business_id" />
</join-columns>
</many-to-one>
Finally, the UserBusinessMeta entity also has the two columns for business_id and user_id mapped like so:
<field name="userId" column="user_id" type="integer" />
<field name="businessId" type="integer" column="business_id" />
It turns out the issue was due to having two IDs in the UserBusiness entity, one of which needed to be marked as an association key. The following update fixed it:
<id name="business" column="business_id" type="integer" association-key="true" />
I have models as follows:
from django.db import models

class ModelA(models.Model):
    # max_length values here are illustrative
    id = models.CharField(max_length=10, primary_key=True)

class ModelB(models.Model):
    name = models.CharField(max_length=100)
    base = models.BooleanField(default=False)
    modela = models.ForeignKey(ModelA, on_delete=models.CASCADE)
In ModelB we have records as below:
id name base modela
------------------------------------------------
1 solution_base True X2ZQ
2 solution_x False X2ZQ
3 solution_base True ALSB
4 solution_z False ALSB
5 solution_base True 5YET
6 solution_c False 5YET
7 solution_base True PIAT
... ... ... ...
As you can see, each record has a base copy of itself that can be identified by the shared modela foreign key. All I need is, given a normal solution's id (for example, solution_x's), to query its base equivalent (the row with the same modela id and base=True). Here is what I have done so far:
modela_id = ModelB.objects.filter(id=modelb_pk).select_related('modela_id').values_list('modela_id', flat=True)
modelb_solution_base_id = ModelB.objects.filter(modela_id=modela_id[0]).filter(base=True).select_related('modela_id').values_list('id', flat=True)
I guess there should be a way to merge these two using prefetch_related(Prefetch()), but I have no idea how to use it. Any help would be highly appreciated.
I think you're making this a bit more complicated than necessary -- the Django ORM handles much of this for you, thanks to that foreign-key relationship. Given an ID for ModelB, the ID for the other ModelB with the same ModelA but where base=True is just:
ModelB.objects.get(id=modelb_pk).modela.modelb_set.get(base=True).id
Why does this work?
Because ModelB has a many-to-one relationship with ModelA, we can call .modela on an instance of ModelB to get the corresponding ModelA.
Conversely, given an instance of ModelA, .modelb_set gives us access to all ModelB records associated with that ModelA.
We can then call .get/.filter on modelb_set just like we would with ModelB.objects.
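If you'd rather collapse this into a single query, here is a minimal sketch (not from the answer above; it assumes the default reverse lookup name modelb for the ModelB -> ModelA foreign key):

# Single query: join through modela back to the sibling row with base=True.
base_id = (
    ModelB.objects
    .filter(base=True, modela__modelb=modelb_pk)
    .values_list('id', flat=True)
    .first()
)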
When trying to retrieve an entity from Dynamics CRM using FetchXML, one of the attributes appears to be missing.
<fetch version="1.0" output-format="xml-platform" mapping="logical" distinct="true">
<entity name="sb_eventbooking">
<attribute name="sb_name" />
<attribute name="sb_bookeridname" /> < problem here
<attribute name="createdon" />
<attribute ........
There are 18 attributes in the FetchXML file, but when running the application only 17 are available, and sb_bookeridname is the one that is missing. If I go into the FetchXML file and enter an attribute that I know doesn't exist, then I get an error:
'sb_eventbooking' entity doesn't contain attribute with Name = 'fakeattribute'.
So the application accepts that there is an attribute called 'sb_bookeridname', but I cannot get a value from it. I know there can be issues with columns containing null values, but other attributes don't seem to have this problem. I use this check on all attributes and get values for all the other attributes:
if (entity.Attributes.Contains("sb_bookeridname") && entity.GetAttributeValue<String>("sb_bookeridname") != null)
{
booking.bookeridname = entity.GetAttributeValue<String>("sb_bookeridname");
}
I believe you have a lookup field with the schema name sb_bookerid. When we create a lookup field, CRM automatically creates a column in the table to store the text value corresponding to the lookup, so creating the lookup field sb_bookerid automatically creates a column named sb_bookeridname in the sb_eventbooking entity.
This is why you do not receive an error when executing the FetchXML query: a column with that name exists, but CRM restricts its value from being returned. So if you want to retrieve the value of the sb_bookerid field, please use the following -
if (entity.Contains("sb_bookerid"))
{
    bookeridname = ((EntityReference)entity.Attributes["sb_bookerid"]).Name;
}
Hope it helps.
Here is a cleaner way:
bookeridname = entity.GetAttributeValue<EntityReference>("sb_bookerid")?.Name;
I have this model project.task, which has many situations (project.task.situation). In the project.task.situation model I want to create a sequence, so I added this record:
<record id="sequence_project_task_situation_seq" model="ir.sequence">
<field name="name">Project Task Situation Sequence</field>
<field name="code">project.task.situation</field>
<field name="prefix">Situation N°</field>
<field eval="1" name="number_next"/>
<field eval="1" name="number_increment"/>
<field eval="False" name="company_id"/>
</record>
In the Python code I added this:
name = fields.Char(string='Situation Number', readonly=True)

@api.model
def create(self, vals):
    seq = self.env['ir.sequence'].next_by_code('project.task.situation') or '/'
    vals['name'] = seq
    return super(ProjectTaskSituation, self).create(vals)
What I want is for each task to have its own situation sequence. For example, for task1 I create 2 situations, so I get Situation N°1 and Situation N°2; after that I create situations for task2 and I get Situation N°3 and Situation N°4. That is not good, because I want the sequence to start counting from the beginning for each task. Is this possible? How can it be done?
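One possible sketch (not from the original post): assuming project.task.situation has a task_id Many2one field linking each situation to its task, you could derive a per-task number from a count instead of a global ir.sequence:

@api.model
def create(self, vals):
    # Hypothetical per-task numbering: count the situations that already
    # exist for the same task_id (an assumed field) and add one.
    existing = self.search_count([('task_id', '=', vals.get('task_id'))])
    vals['name'] = 'Situation N°%s' % (existing + 1)
    return super(ProjectTaskSituation, self).create(vals)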
First, a simple example to describe my problem.
Model
public class User
{
public virtual String UserID { get; set; }
public virtual String UserName { get; set; }
public virtual DateTime LastLoginTime { get; set; }
}
Mapping
<id name="UserID" type="AnsiString">
<column name="p_UserID_vc" length="20"></column>
<generator class="assigned"/>
</id>
<property name="UserName" column="UserName_vc" type="AnsiString">
<property name="LastLoginTime" column="LastLoginTime_d" type="DateTime">
Table
create table T_User
(
p_userid_vc VARCHAR2(20) not null,
username_vc VARCHAR2(50),
lastlogintime_d DATE
)
Now, there are one million users in this table. I created an Oracle index on LastLoginTime, and I use a query like this:
var list = Responsity<User>.Where(q => q.LastLoginTime <= DateTime.Now &&
q.LastLoginTime >= DateTime.Now.AddDays(-7));
I used NHibernate Profiler to watch the real SQL string:
select t.p_UserID_vc
from T_User t
where t.lastlogintime_d >= TIMESTAMP '2012-03-19 16:58:32.00' /* :p1 */
and t.lastlogintime_d <= TIMESTAMP '2012-03-26 16:58:32.00' /* :p2 */
It didn't use the index. I think it should use 'to_date' so that it could use the index. How do I configure the mapping file?
There are a few reasons why it might not be using your index:
The datatype of LastLoginTime is a DATE, but the parameters are TIMESTAMPs, so it might be implicitly converting the column to a timestamp, which would mean it cannot use the index.
The Cost-Based-Optimizer (CBO) might be using statistics which indicate that using the index would be less efficient than not using it. For example, there might be very few rows in the table, or a histogram might tell the CBO that a large number of rows match the date range you're querying on. It's not uncommon for full table scans to outperform queries that use indexes.
Perhaps the statistics on the table are out-of-date, causing the CBO to make inaccurate estimates.
Do an explain plan on your query to determine what the cause is.
Note: the plan for a query that uses literal values (e.g. TIMESTAMP '...') could very well be different from that for a query that uses bind variables (e.g. :p1 and :p2). Run the explain plan for the query that is actually being executed.
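For reference, one way to get the plan is with Oracle's DBMS_XPLAN (a sketch using the captured statement from above; bear in mind the note about literals versus bind variables):

EXPLAIN PLAN FOR
select t.p_UserID_vc
from T_User t
where t.lastlogintime_d >= TIMESTAMP '2012-03-19 16:58:32.00'
  and t.lastlogintime_d <= TIMESTAMP '2012-03-26 16:58:32.00';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);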
Just in case anyone is still looking for an answer, I hope this helps.
The fix is to configure NHibernate with the proper dialect; in my case it is NHibernate.Dialect.Oracle10gDialect.
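For example, in hibernate.cfg.xml (a sketch; the dialect property is standard NHibernate configuration, but adjust it to however you build your session factory):

<property name="dialect">NHibernate.Dialect.Oracle10gDialect</property>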
I have two classes (UserSet and User) which have a many-to-many relation (i.e. every user can belong to several UserSets). In the database there is a UserSet table, a User table and an 'in between' table (UsersToUserSets).
Now, if I want to remove a user from a UserSet by doing
userSet.getUsers().remove(user);
session.flush()
Hibernate first fetches all users belonging to userSet, then removes the one user and updates the 'in between' table.
As there may be thousands of users belonging to a UserSet, this is very bad for performance. Is there a way to avoid fetching all of the users?
The interesting parts of the mapping files look like this:
<class name="...UserSet">
...
<set name="users" table="T_UM_USERS2USER_SETS">
<key column="FK_USER_SET_ID" />
<many-to-many column="FK_USER_ID"
class="...User" />
</set>
...
</class>
<class name="...User">
...
<set name="userSets" table="T_UM_USERS2USER_SETS" inverse="true">
<key column="FK_USER_ID" />
<many-to-many column="FK_USER_SET_ID" class="...UserSet" />
</set>
</class>
All users for a particular UserSet are fetched because you're calling userSet.getUsers().remove(user). Performing any operation on a lazy collection causes the collection to be fetched. What you can do is:
1) If the cardinality of userSets is lower than that of users (e.g. a given user only belongs to a few userSets), you can switch the inverse end of this relationship and invoke user.getUserSets().remove(userSet) instead - I'm assuming here that you want to remove the association only and not the actual entity.
OR
2) You can define a named SQL query to delete the association row from the T_UM_USERS2USER_SETS table and execute it, as sketched below.
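A minimal sketch of the second option (assuming getId() accessors on both entities and an open Hibernate Session; the table and column names come from the mapping above). This uses an ad-hoc native query rather than a named one, but the SQL would be the same either way:

// Delete the join-table row directly, without loading the users collection.
session.createSQLQuery(
        "delete from T_UM_USERS2USER_SETS"
      + " where FK_USER_SET_ID = :setId and FK_USER_ID = :userId")
    .setParameter("setId", userSet.getId())
    .setParameter("userId", user.getId())
    .executeUpdate();

Note that this bypasses the in-memory collections, so evict the affected UserSet (or avoid touching userSet.getUsers() afterwards) to keep the session state consistent.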