Spring: weird one-to-many mapping with a relational table for each relation

I'm trying to build a number of relations the way I've been told to. I have 6 tables; let's call them A, B, C, D, E and F.
The relation between them is always 1:N.
I've been asked to map those tables in Spring/JPA by creating a new relational table each time like so:
A + B -> AB
AB + C -> ABC
ABC + D -> ABCD
ABCD + E -> ABCDE
ABCDE + F -> ABCDEF
...where AB, ABC, ABCD, ABCDE and ABCDEF are the new relational tables I have to create.
It feels so weird to me to map tables like this, even more so when the relation between them is not N:N but 1:N. I also don't see the purpose of doing it, so I came here hoping you could help me with both issues: understanding why this would make sense, and how to achieve it.
I've tried for 2 days, mapping the tables as if they were N:N, but I always get an error like "Caused by: org.hibernate.MappingException: Foreign key (FKsxjpculqrp0noj2x8cetijcof:CEV_ambito [id_amb])) must have same number of columns as the referenced primary key (CEV_ambito [FK_tea_amb,id_amb])"
Please, any help or indications on how to do this properly would be really appreciated. Thank you all.

This is indeed a weird requirement (it might make more sense if we knew what data those tables hold), but in any case: to get 1:N you should use a foreign key inside the next table, so B has a foreign key to A, C has a foreign key pointing at B, and so forth.
For a unidirectional one-to-many, Hibernate (JPA) by default uses a separate intermediary table for the mapping, which makes it look like many-to-many, but you can customize this behavior with @JoinColumn as shown here:
https://www.baeldung.com/jpa-join-column#oneToMany_mapping
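As a sketch of what the underlying schema would look like (table and column names here are hypothetical), the whole 1:N chain needs nothing more than a foreign key in each "many"-side table:

```sql
-- Hypothetical DDL sketch: each "many" table carries a foreign key to its
-- parent, so no intermediate AB/ABC/... tables are needed for 1:N.
CREATE TABLE a (id BIGINT PRIMARY KEY);

CREATE TABLE b (
    id   BIGINT PRIMARY KEY,
    a_id BIGINT NOT NULL REFERENCES a (id)  -- each B row belongs to one A
);

CREATE TABLE c (
    id   BIGINT PRIMARY KEY,
    b_id BIGINT NOT NULL REFERENCES b (id)  -- each C row belongs to one B
);
-- ...and so on for d, e and f.
```

In JPA terms, this corresponds to a @OneToMany on the parent combined with a @JoinColumn naming the foreign-key column, as the linked article describes.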


Azure Databricks: Error in SQL statement: package.TreeNodeException: execute, tree: Exchange hashpartitioning

I'm currently working with 2 temporary views, A and B. Selecting records from each individual view returns results, and creating a third view C as a join of A and B also works, but running any select query on view C gives the error "Error in SQL statement: package.TreeNodeException: execute, tree: Exchange hashpartitioning".
Please help; what's going wrong here?
A possible reason is a skewed join, i.e. the fields you are joining on have heavily repeated combinations. This mainly happens when the joining fields contain null values in both views, so that many nulls end up joined with many nulls.
It can be avoided by adding another distinguishing field to the join. Alternatively, join the non-null values separately and append the null-keyed rows from both sides.
If this does not solve the problem, share a code snippet with us and we can try to replicate and solve the issue.
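The "join non-null values separately and append the null rows" suggestion can be sketched in Spark SQL roughly like this (view and column names are hypothetical, assuming a join where A's key may be null):

```sql
-- Hypothetical sketch: join only on non-null keys, then append A's rows
-- whose key is null, so the null keys do not all land in a single
-- shuffle partition during Exchange hashpartitioning.
CREATE OR REPLACE TEMPORARY VIEW C AS
SELECT a.*, b.extra_col
FROM   A a
JOIN   B b
  ON   a.join_key = b.join_key
WHERE  a.join_key IS NOT NULL

UNION ALL

SELECT a.*, CAST(NULL AS STRING) AS extra_col
FROM   A a
WHERE  a.join_key IS NULL;
```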

Tableau Data Blend Performance/ Level of Detail

I have two blended sources where the source A has 500,000 rows and source B has 20,000. The problem is that when I use source A as the primary source, any computation in the dashboard takes far too long to be useful. When I use B as my primary source, performance is much improved...
...but, the level of detail I need is in source A. When source B is primary I am left with the dreadful asterisk where there is a one-to-many relationship.
Source A primary:
Event (from source B)    Occurred_On (from source A)
ABC                      1/1/2000
ABC                      5/10/2000
XYZ                      9/9/2002
XYZ                      4/5/2002
Source B primary:
Event (from source B)    Occurred_On (from source A)
ABC                      *
XYZ                      *
Data must be blended -- source A is a database and source B is a text file, so a join is out of the question.
Patience is waning and all hope seems to be lost. Is there any possible way to use B as the primary source while maintaining the level of detail from a field in A?
Or is there any other workaround that could solve this?
I would go with the options below:
- Cross-database joins: http://www.tableau.com/about/blog/2016/7/integrate-your-data-cross-database-joins-56724
- Create a lookup table from the text file and apply a join.
- To improve performance, create a data extract file of the combined dataset and use it as the primary data source for the report: http://onlinehelp.tableau.com/current/pro/desktop/en-us/help.htm#extracting_data.html
That said, 500K records alone should not cause this performance drag, so it would also be a good idea to recheck the server configuration and see if there are any bottlenecks.

Matching Similar but not same columns

I have 2 tables, say A and B. I need to update the state column in A based on the city in B.
B holds the actual lookup data.
Both A and B have a City column.
City in A is kind of junk data, like "Atlanta", "Atlanta Georgia", "Atlanta-Georgia", "Atlanta,Georgia", etc.
City in B is just "Atlanta".
I need to compare both cities and update the state in A:
SELECT DISTINCT b.state FROM A, B WHERE INSTR(A.city ,TRIM(UPPER(B.CITY))) >0
The above query selects most of them, but misses some. Can someone help me out, please?
Thanks
Can you please list some examples that are left out by the above SQL? Thanks.
Secondly, try the SOUNDEX function and see if that works.
Cheers
V
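A rough Oracle-style sketch of the SOUNDEX idea (table and column names follow the question; the leading-token extraction is an assumption about the junk format, so adjust the regular expression to your data):

```sql
-- Hypothetical sketch: strip A.city down to its first token (up to the
-- first space, comma or hyphen) before comparing, and use SOUNDEX to
-- tolerate minor spelling variations against B's clean city values.
UPDATE a
SET    state = (
           SELECT b.state
           FROM   b
           WHERE  SOUNDEX(TRIM(b.city)) =
                  SOUNDEX(REGEXP_SUBSTR(a.city, '^[^ ,-]+'))
       )
WHERE  EXISTS (
           SELECT 1
           FROM   b
           WHERE  SOUNDEX(TRIM(b.city)) =
                  SOUNDEX(REGEXP_SUBSTR(a.city, '^[^ ,-]+'))
       );
```

Note that SOUNDEX can produce false matches for similar-sounding but distinct cities, so it is worth reviewing the matched pairs before committing the update.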

Using the same OrderByAttribute for two attributes in one dimension

In my SSAS cube I have a dimension with attributes A and B, and I want both to be displayed in the sort order specified by a third attribute C. If I specify C as the OrderByAttribute for A and reprocess the cube, then A is sorted correctly.
If I then specify C as the OrderByAttribute for B as well and reprocess the cube, then A continues to be sorted correctly but B does not. Values of B are displayed in an order that seems arbitrary. I have triple-checked that there is no difference in the way A and B are configured.
Is there some conceptual reason why two attributes in a single dimension cannot be both sorted by the same third attribute?
I have now located the problem but still looking for a solution.
As mentioned in the comments, the DSV has tables CM and DisplayOrder with two relationships between them: from CM.A to DisplayOrder.primarykey and from CM.B to DisplayOrder.primarykey. SSAS constructs the attribute A using the query:
select distinct CM.A, DisplayOrder.SortOrder
from
(<named query for CM>) as CM,
(<named query for DisplayOrder>) as DisplayOrder
where CM.A = DisplayOrder.primarykey
That is correct and works fine. But when SSAS constructs the attribute B, it uses the query:
select distinct CM.B, DisplayOrder.SortOrder
from
(<named query for CM>) as CM,
(<named query for DisplayOrder>) as DisplayOrder
where CM.A = DisplayOrder.primarykey
Note that the where clause links the two tables using A rather than B.
So in summary, when the DSV has two tables with two relationships between them, the joins in the queries SSAS generates to implement OrderByAttribute always use one of those relationships and ignore the other.
Any idea why, or if there is a property somewhere I may have missed?
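For comparison, the query one would expect SSAS to generate for attribute B, using B's own relationship to the lookup table, would be:

```sql
select distinct CM.B, DisplayOrder.SortOrder
from
    (<named query for CM>) as CM,
    (<named query for DisplayOrder>) as DisplayOrder
where CM.B = DisplayOrder.primarykey
```

The only difference from the generated query shown above is the join column in the where clause.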

Doctrine result cache bug with LEFT JOIN ... WITH condition

I tend to find answers before I need to post a question here, but today I can't seem to find out what is wrong.
We're using Doctrine 2.1.2 in a Symfony 2 app, and in a repository we have two methods that use almost the same query.
The only difference between method A and method B is that there is a condition added to a JOIN that is common to both queries.
The problem is that Doctrine seems to use the same result cache for both queries.
When we call method A, method B uses the cache from A, and the other way around.
I have been using expireResultCache(true) and useResultCache(false), to no avail.
Here's what the queries look like:
-- method A
SELECT DISTINCT a, b, c FROM MyBundle:ObjectA a INDEX BY a.id
LEFT JOIN a.fkObjectB b
LEFT JOIN a.fkObjectC c
-- method B
SELECT DISTINCT a, b, c FROM MyBundle:ObjectA a INDEX BY a.id
LEFT JOIN a.fkObjectB b WITH b.some_field IS NULL
LEFT JOIN a.fkObjectC c
When I use getSQL(), I see that they result in different queries, as expected. The generated queries, when run independently against the database, do generate different results.
This leads me to believe that it may be an annoying result cache bug, where Doctrine does not cache the conditions for JOINs, but only the table names.
Is this a bug, or is there something I can do?
EDIT Still happening in Doctrine 2.1.6.
I think the problem you have is fixed in Doctrine 2.2. I had a similar problem related to the result cache; here are my question & answers.
Just to expand on michel v's comment: Doctrine 2 is fetching the same object instance both times via the identity map pattern.
Calling:
EntityManager#clear()
clears the identity map and forces the EntityManager to fetch a fresh copy of the object from the database.
