Symfony2/Doctrine DQL QueryException

So I'm attempting to do a query on a table that holds two foreign keys. This table basically sets a specific userID to be an "admin" of a specific zoneID in our application. My DQL is this:
'SELECT z, zone FROM MLBPBeerBundle:TableZoneAdmins z JOIN z.zoneAdminsZID WHERE z.zoneAdminsPID = '.$userID .'AND zone.zoneId = z.zoneAdminsZID'
I am getting this error:
An exception has been thrown during the rendering of a template ("[Syntax Error] line 0, col 80: Error: Expected end of string, got 'z'") in "MLBPBeerBundle:Profile:index.html.twig" at line 10.
From looking at the query, the part in question ("line 0, col 80") is the beginning of z.zoneAdminsPID, which means, at least in my interpretation of the error, that it expects the string to end right after the WHERE, which makes no sense.
What makes this even more confusing is that I have already successfully used a similar query to get a team name out of our games table, which has a foreign key to the team's id:
'SELECT g, team1 FROM MLBPBeerBundle:TableGame g JOIN g.gameWinnertid WHERE g.gameZoneid = '.$zoneId .'AND team1.id = g.gameWinnertid'
Thank you for any help you can provide; this has left me stumped, as I don't really see a difference in how the two queries operate other than the fact that they are grabbing different data.

I was able to fix this by not including the JOIN at all. Doctrine is more automagical than I thought, in that with just
'SELECT z FROM MLBPBeerBundle:TableZoneAdmins z WHERE z.zoneAdminsPID = '.$userID
the zoneAdminsZID on each result was already a proper Zone entity. I believe this lazily loads the entity, i.e. the query doesn't fire until I "dereference" the ZID, but for us this works just fine.
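For reference, the failing JOIN is most likely missing its alias: DQL requires an identification variable after the joined association, so the parser reads on past WHERE and trips exactly where the asker observed (col 80 is the start of z.zoneAdminsPID). The concatenation also drops the space before AND. A sketch of what the explicit-join version might look like, using a bound parameter instead of concatenation (the zone alias is an assumption here):

'SELECT z, zone FROM MLBPBeerBundle:TableZoneAdmins z JOIN z.zoneAdminsZID zone WHERE z.zoneAdminsPID = :userId'

with the value bound via ->setParameter('userId', $userID), which also keeps raw values out of the query string. Note the join condition itself then replaces the manual zone.zoneId = z.zoneAdminsZID predicate.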

Related

Oracle Spatial Index on a View of multiple tables

I have three big tables with the same structure (BATHY_SHC_1, BATHY_SHC_2 and BATHY_SHC_3), each with an SDO_GEOMETRY column "POINT_PP"; the spatial index of each is VALID.
I have made a view on these tables that includes this geometry column (V_BATHY_SHC).
I can run a spatial request on the view to find all points within a rectangle, like this, with correct results:
SELECT PT_ID, POINT_PP from V_BATHY_SHC
WHERE SDO_RELATE(POINT_PP, MDSYS.SDO_GEOMETRY(2003, 32618, null, MDSYS.SDO_ELEM_INFO_ARRAY(1,1003,3),MDSYS.SDO_ORDINATE_ARRAY(635267,5037808, 635277,5037818)), 'mask=anyinteract') = 'TRUE';
I have uploaded a polygon mask into a table MNE_MASK, added a line to the metadata and created a spatial index (as usual). It has a valid spatial index. The geometry is in the same SRID (32618).
Then I want to get all points from the view that are within the polygon from the MNE_MASK table. If I run the query on one of the tables, I get correct results:
SELECT A.PT_ID, A.POINT_PP
FROM BATHY_SHC_1 A, MNE_MASK B
WHERE B.MNE_ID = 1
AND SDO_RELATE(A.POINT_PP, B.MASK, 'mask=ANYINTERACT ') = 'TRUE';
But if I run it on the view like this:
SELECT A.PT_ID, A.POINT_PP
FROM V_BATHY_SHC A, MNE_MASK B
WHERE B.MNE_ID = 1
AND SDO_RELATE(A.POINT_PP, B.MASK, 'mask=ANYINTERACT ') = 'TRUE';
I got this error:
ORA-13226: interface not supported without a spatial index
ORA-06512: at "MDSYS.MD", line 1723
ORA-06512: at "MDSYS.MDERR", line 8
ORA-06512: at "MDSYS.SDO_3GL", line 94
In the past, I've always run spatial queries against views over multiple tables without problems.
I can run a spatial query on each of the two without issue, but I can't run an SDO_RELATE between them.
Why is this one any different?
Thanks a lot for your insight and help!
Edit:
I've found a quick workaround, though it doesn't explain the behaviour: if I swap the first two parameters of the SDO_RELATE function, the query works.
SELECT A.PT_ID, A.POINT_PP
FROM V_BATHY_SHC A, MNE_MASK B
WHERE B.MNE_ID = 1
AND SDO_RELATE(B.MASK, A.POINT_PP, 'mask=ANYINTERACT ') = 'TRUE';
In SDO_RELATE, the first parameter is a geometry column in a table, whereas the second is a single geometry. That said, I wonder why your original rectangle query works: there you have the view column as the first parameter, just like in the failing query.

How to design querying multiple tags on analytics database

I would like to store custom tags on each user purchase transaction; for example, if a user bought shoes, the tags might be "SPORTS", "NIKE", "SHOES", "COLOUR_BLACK", "SIZE_12", ...
These are the tags the seller is interested in querying back to understand the sales.
My idea is: whenever a new tag comes in, create a new code for it (something like a hash code, but sequential). Codes start with the 26 letters "a"-"z", then continue "aa, ab, ac, ... zz", and so on. Then keep all the tags given in one transaction in a single varchar column called tag, separated by "|".
Let us assume mapping is (at application level)
"SPORTS" = a
"TENNIS" = b
"CRICKET" = c
...
...
"NIKE" = z //Brands company
"ADIDAS" = aa
"WOODLAND" = ab
...
...
SHOES = ay
...
...
COLOUR_BLACK = bc
COLOUR_RED = bd
COLOUR_BLUE = be
...
SIZE_12 = cq
...
So, storing the above purchase transaction, the tag column will look like tag="|a|z|ay|bc|cq|". The seller can then search for the number of SHOES sold by adding the WHERE condition tag LIKE '%|ay|%'. Now the problem is that I cannot use an index (sort key in Redshift) for a LIKE pattern that starts with %. How do I solve this, given that I might have 100 million records? I don't want a full table scan. Any solution to fix this?
Update_1:
I have not followed the bridge table concept (cross-reference table), since I want to perform a GROUP BY on the results after searching for the specified tags. My solution gives only one row when two tags match in a single transaction, but a bridge table would give me two rows, and then my sum() would be doubled.
I got a suggestion like the one below: put
EXISTS (SELECT 1 FROM transaction_tag WHERE tag_id = 'zz' AND trans_id = tr.trans_id)
in the WHERE clause once for each tag (note: this assumes tr is an alias to the transaction table in the surrounding query).
I have not followed this either, since I have to combine AND and OR conditions on the tags, for example ("SPORTS" AND "ADIDAS"), or "SHOES" AND ("NIKE" OR "ADIDAS").
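For what it's worth, such EXISTS predicates can in principle be nested inside AND/OR. A sketch for "SHOES" AND ("NIKE" OR "ADIDAS"), assuming hypothetical tables transactions(trans_id, amount) and transaction_tag(trans_id, tag_id) as in the suggestion above:

SELECT tr.trans_id, tr.amount
FROM transactions tr
WHERE EXISTS (SELECT 1 FROM transaction_tag t
              WHERE t.trans_id = tr.trans_id AND t.tag_id = 'ay')   -- SHOES
AND (EXISTS (SELECT 1 FROM transaction_tag t
             WHERE t.trans_id = tr.trans_id AND t.tag_id = 'z')     -- NIKE
  OR EXISTS (SELECT 1 FROM transaction_tag t
             WHERE t.trans_id = tr.trans_id AND t.tag_id = 'aa'));  -- ADIDAS

Each transaction appears at most once in the result, so a subsequent GROUP BY and sum() are not doubled.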
Update_2:
I have not followed the bitfield approach, since I don't know whether Redshift supports it, and also because, assuming my system will eventually have at least 3500 tags, allocating one bit per tag results in about 437 bytes per transaction, even though a transaction can carry at most 5 tags. Any optimisation here?
Solution_1:
I have thought of adding a min value (SMALLINT) and a max value (SMALLINT) alongside the tags column, and applying an index on those.
So, something like this:
"SPORTS" = a = 1
"TENNIS" = b = 2
"CRICKET" = c = 3
...
...
"NIKE" = z = 26
"ADIDAS" = aa = 27
So my column values are
tag="|a|z|ay|bc|cq|" //sorted?
minTag=1
maxTag=95 //for cq
And the query for searching SHOES (ay = 51) is
maxTag <= 51 AND tag LIKE %|ay|%
And the query for searching SHOES (ay = 51) AND SIZE_12 (cq = 95) is
minTag >= 51 AND maxTag <= 95 AND tag LIKE %|ay|%|cq|%
Will this give any benefit? Kindly suggest any alternatives.
You can implement auto-tagging while the files get loaded to S3. Tagging at the DB level is too late in the process; it's tedious and involves a lot of hard-coding.
1. While loading to S3, tag the object using the AWS s3api, capturing the tags dynamically by passing them in as parameters. Example below (bucket and key values elided):
aws s3api put-object-tagging --bucket --key --tagging "TagSet=[{Key=Addidas,Value=AY}]"
2. Load the tags to DynamoDB as a metadata store.
3. Load the data to Redshift using the S3 COPY command.
You can store the tags column as a varchar bit mask, i.e. a strictly defined bit sequence of 1s and 0s, so that if a purchase is marked by a tag there is a 1 at that tag's position, and a 0 otherwise. For every row you will have a sequence of 0s and 1s of the same length as the number of tags you have. This sequence is sortable; you would still need a lookup into the middle of the string, but you know at which specific position to look, so you don't need LIKE, just SUBSTRING. For further optimization, you can convert this bit mask to integer values (it will be unique for each sequence) and match on that, but AFAIK Redshift doesn't support that out of the box yet; you would have to define the rules yourself.
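A minimal sketch of such a positional lookup, assuming a fixed-width tag_mask column on a hypothetical transactions table, with position 51 standing in for SHOES:

SELECT trans_id, amount
FROM transactions
WHERE SUBSTRING(tag_mask, 51, 1) = '1';   -- '1' at (hypothetical) position 51 = SHOES tag set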
UPD: It looks like the best option here is to keep tags in a separate table and create an ETL process that unwraps the tags into a tabular structure of (order_id, tag_id), distributed by order_id and sorted by tag_id. Optionally, you can create a view that joins this one with the order table. Then lookups for orders with a particular tag, and further aggregations of those orders, should be efficient. There is no silver bullet for optimizing this in a flat table, at least none I know of that would not bring a lot of unnecessary complexity compared to the "relational" solution.
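A minimal Redshift sketch of that layout, with hypothetical table and column names:

CREATE TABLE order_tag (
  order_id BIGINT,
  tag_id VARCHAR(4)
)
DISTKEY (order_id)
SORTKEY (tag_id);

-- total sales of orders tagged SHOES ('ay'); one bridge row per (order, tag),
-- so a single-tag filter cannot double the sum
SELECT SUM(o.amount) AS shoes_sales
FROM orders o
JOIN order_tag t ON t.order_id = o.order_id
WHERE t.tag_id = 'ay';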

How can I group, sum and count with Sequel ORM and PostgreSQL?

This is too tough for me, guys. It's for Jeremy!
I have two tables (although I can also envision needing to join a third table), and I want to sum one field and count rows in the same table, while joining with another table, and return the result in JSON format.
First of all, the field that needs to be summed is numeric(10,2), and the data is inserted as params['amount'].to_f.
The tables are expense_projects, which has the name of the project and the company id, and expense_items, which has the company_id, item and amount (to mention just the critical columns); the company_id columns are disambiguated.
So, the following code:
expense_items = DB[:expense_projects].left_join(:expense_items, :expense_project_id => :project_id).where(:project_company_id => company_id).to_a.to_json
works fine but when I add
expense_total = expense_items.sum(:amount).to_f.to_json
I get an error message which says
TypeError - no implicit conversion of Symbol into Integer:
so the first question is: why, and how can this be fixed?
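(As an aside, a likely answer to the "why": after .to_a.to_json, expense_items is a plain JSON String, not a Sequel dataset, so .sum(:amount) dispatches to Ruby's String#sum, which expects an integer bit width; hence "no implicit conversion of Symbol into Integer". Summing on the dataset before the JSON conversion avoids this.)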
Then I want to join the two tables and get all the project names from the left (first) table, and sum amount and count items in the second table. I have tried
DB[:expense_projects].left_join(:expense_items, :expense_items_company_id => expense_projects_company_id).count(:item).sum(:amount).to_json
and variations of this, all of which fail.
I would like a result which gets all the project names (even if there are no expense entries) and returns something like:
project item_count item_amount
pr 1 7 34.87
pr 2 0 0
and so on. How can this be achieved with one query returning the result in json format?
Many thanks, guys.
Figured it out, I hope this helps somebody else:
DB[:expense_projects___p].where(:project_company_id=>user_company_id).
left_join(:expense_items___i, :expense_project_id=>:project_id).
select_group(:p__project_name).
select_more{count(:i__item_id)}.
select_more{sum(:i__amount)}.to_a.to_json
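For reference, the SQL this chain roughly compiles to (a sketch; Sequel's exact qualification and aliasing may differ):

SELECT p.project_name, count(i.item_id), sum(i.amount)
FROM expense_projects AS p
LEFT JOIN expense_items AS i ON (i.expense_project_id = p.project_id)
WHERE (p.project_company_id = ?)   -- user_company_id
GROUP BY p.project_name;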

Oracle 9i Sub query

Hi, can anyone help me out with the logic of forming this query?
SELECT C.CPPID, c.CPP_AMT_MANUAL
FROM CPP_PRCNT CC,CPP_VIEW c
WHERE
CC.CPPYR IN (
SELECT C.YEAR FROM CPP_VIEW_VIEW C WHERE UPPER(C.CPPNO) = UPPER('123')
AND C.CPP_CODE ='CPP000000000053'
and TO_CHAR(c.CPP_DATE,'YYYY/Mon')='2012/Nov'
)
AND UPPER(C.CPPNO) = UPPER('123')
AND C.CPP_CODE ='CPP000000000053'
and TO_CHAR(c.CPP_DATE,'YYYY/Mon') = '2012/Nov';
Please correct me if I have formed the wrong query structure, in terms of query performance and standards. Thanks in advance.
If you have indexes or partitioned tables, I would not use functions on columns but on variables/literals, so that indexes can be used and partitions selected.
Also, I would use ANSI-92 SQL syntax. You don't specify (at least not directly) a join condition between CPP_PRCNT and CPP_VIEW, so it is actually a cartesian product (cross join):
SELECT C.CPPID, c.CPP_AMT_MANUAL
FROM CPP_PRCNT CC
CROSS JOIN CPP_VIEW c
WHERE
CC.CPPYR IN (
SELECT C.YEAR
FROM CPP_VIEW_VIEW C
WHERE C.CPPNO = '123'
AND C.CPP_CODE ='CPP000000000053'
AND trunc(c.CPP_DATE,'MM')=to_date('2012/Nov','YYYY/Mon')
)
AND C.CPPNO = '123'
AND C.CPP_CODE ='CPP000000000053'
AND trunc(c.CPP_DATE,'MM')=to_date('2012/Nov','YYYY/Mon')
If you show us the definition of CPP_VIEW_VIEW (it seems to be a view over CPP_VIEW), the definition (if simple) of CPP_VIEW, and what you're trying to achieve, I bet there are more things to be improved/fixed.
There are a couple of things you could improve (the first two are sketched after this list):
- if possible, get rid of the UPPER() in the comparison, as it renders any indices useless; if that's not possible, consider a function-based index on UPPER(CPPNO)
- do not convert your DATE column to a string to compare it with a string; do it the other way round, i.e. convert your string to a date. That way you need only one conversion instead of one per table row, and indices stay usable
- play around with EXISTS instead of IN, as suggested by Dileep; it might be faster
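A sketch of the first two suggestions, assuming CPPNO and CPP_DATE live on a base table behind the view (CPP_BASE is a hypothetical name; function-based indexes go on tables, not views):

CREATE INDEX idx_cpp_cppno_upper ON CPP_BASE (UPPER(CPPNO));

SELECT c.CPPID, c.CPP_AMT_MANUAL
FROM CPP_VIEW c
WHERE UPPER(c.CPPNO) = UPPER('123')   -- can now use the function-based index
AND c.CPP_CODE = 'CPP000000000053'
AND c.CPP_DATE >= TO_DATE('2012/Nov', 'YYYY/Mon')   -- date bounds instead of TO_CHAR per row
AND c.CPP_DATE < ADD_MONTHS(TO_DATE('2012/Nov', 'YYYY/Mon'), 1);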

PL/SQL: UPDATE inside CURSOR, but some data is NULL

I'm still learning some of the PL/SQL differences, so this may be an easy question, but... here goes.
I have a cursor which grabs a bunch of records with multiple fields. I then run two separate SELECT statements in a LOOP from the cursor results to grab some distances and calculate those distances. These work perfectly.
When I go to update the table with the new values, my problem is that there are four pieces of specific criteria.
update work
set kilometers = calc_kilo,
kilo_test = test_kilo
where lc = rm.lc
AND ld = rm.ld
AND le = rm.le
AND lf = rm.lf
AND code = rm.code
AND lcode = rm.lcode
and user_id = username;
My problem is that this rarely updates anything, because rm.lf and rm.le have NULL values in the database. How can I combat this and create the correct update?
If I'm understanding you correctly, you want to match lf with rm.lf, including when they're both null? If that's what you want, then this will do it:
...
AND (lf = rm.lf
OR (lf IS NULL AND rm.lf IS NULL)
)
...
The equality lf = rm.lf evaluates to UNKNOWN (not true) when either side is NULL, so the OR branch is needed to return true when both are NULL.
The first thing I'd look at is not using a cursor to read data, then make calculations, then perform updates. In 99% of cases it's faster and easier to run a single update that does all of this in one step:
update work
set kilometers = calc_kilo,
kilo_test = test_kilo
where lc = rm.lc
AND ld = rm.ld
AND NVL(le,'x') = NVL(rm.le,'x')
AND NVL(lf,'x') = NVL(rm.lf,'x')
AND code = rm.code
AND lcode = rm.lcode
and user_id = username;
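One caveat with the NVL form: it treats NULL as equal to the sentinel 'x', so 'x' must be a value that can never legitimately occur in le, lf, rm.le or rm.lf; otherwise a real 'x' on one side would match a NULL on the other. The explicit (lf = rm.lf OR (lf IS NULL AND rm.lf IS NULL)) pattern from the first answer avoids that assumption.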
