How do I handle complex relations with Laravel?

Code-based answers are much appreciated, but I'm also interested in knowing the logic behind handling this, as it might come up again later.
I'm working on an eCommerce project. Most products are variable products. Imagine a shirt, for example.
A shirt can come in 5 different sizes (XS, S, M, ML, L) and 3 colors (Red, Green, Blue). These are just examples; the attributes are dynamic and can be edited at any time. My approach was to create 3 different tables for the product itself:
- main_product_table
-- id
- product_variant_table
-- id
-- main_product_id
- product_variant_combination_table
-- id
-- variant_id
-- attribute_name_id
-- attribute_value_id
And 2 more tables for the attributes:
- attribute_name_table
-- id
-- attribute_name
- attribute_value_table
-- id
-- attribute_value
So, for example, the shirt above now creates the following records:
2 records in attribute_name_table (size and color)
8 records in attribute_value_table (5 sizes + 3 colors)
1 record in main_product_table
3x5 records in product_variant_table (one per size/color combination)
2x3x5 records in product_variant_combination_table (2 rows per variant, each holding the value of 1 attribute)
Now consider querying all products that include an XS size, or querying one product with all of its variation data.
My questions are:
Can Eloquent models handle this?
If so, should I create 5 models, one per table?
If not, should I create one or two models (Product, Attribute) and then connect them using custom SQL and methods?
What are the relations here? Attributes look one-to-many to me, but combinations also look one-to-many, and the two are connected to each other as well.
I'm struggling to understand the relations between the above tables. It seems understandable via SQL, but I can't relate them to each other via Eloquent.
Also, the tables seem to grow massive very quickly. Is this approach proper at all?

Your approach seems very difficult to maintain in the long term. I don't understand why you would separate the attribute name from its value.
I would create Product, Size and Color models with their migrations, plus a pivot table to store the stock:
- size_table
-- id
-- name
- color_table
-- id
-- name
- product_table
-- id
-- type (shirt, pants, pullover, etc.)
-- description
-- status
- stock_table
-- product_id (reference to the product table)
-- color_id (reference to the color table)
-- size_id (reference to the size table)
-- quantity (quantity of products available with these attributes)
-- status
This way you can store the same product with more than one combination in stock_table.
Now you only need to define the relationships between each model, and then you will be able to access all the attributes from a stock_table row:
$item->quantity
$item->product->description
$item->size->name
$item->color->name
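If it helps to see what those lookups amount to in plain SQL, here is a sketch against the table layout above (the table and column names follow the list; the XS filter and the product id are illustrative, not from the original answer):

-- All products that come in size XS:
SELECT DISTINCT p.*
FROM product_table p
JOIN stock_table st ON st.product_id = p.id
JOIN size_table s ON s.id = st.size_id
WHERE s.name = 'XS';

-- One product with all of its variation data:
SELECT p.description, s.name AS size, c.name AS color, st.quantity
FROM stock_table st
JOIN product_table p ON p.id = st.product_id
JOIN size_table s ON s.id = st.size_id
JOIN color_table c ON c.id = st.color_id
WHERE p.id = 1;

In Eloquent terms these are just belongsTo relations from the stock model to Product, Size and Color (and hasMany in the other direction), so one model per table is a perfectly workable setup.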

Related

How to put measures from multiple tables into one matrix in Power BI?

I have 8 tables with data on sold products. Each table is about a unique product. In Power BI, I want to create a matrix containing the sold quantities (values) per product (rows), per month (columns), and the number of unique customers who bought the products.
Each of the 8 tables with the sales data has the following structure. The App ID is different for each table, but is constant within a table. Example for a Cars table:
Customer ID Month App ID
29273 2020-3 1
90283 2018-5 1
55824 2016-12 1
55824 2018-10 1
55824 2021-1 1
So a bicycle table would have the same structure, but the App ID would be, for example, 2 in the entire table.
I have two tables that are connected with the 8 product tables in a one-to-many relationship: the Calendar table, based on the Month column, and the Apps table, based on the App ID column.
The table Calendar:
Month
2015-1
2015-2
2015-3
2015-4
2015-5
...
...
The table Apps:
ID Name
1 Cars
2 Bicycle
3 Scooter
4 ...
I created the Calendar and Apps tables so that I could use them for the matrix, but it doesn't work like I want so far. In the end, I want to create a matrix like this (where P = the number of products sold, and C = the number of customers in that month for that product):
Product 2015-1 2015-2 2015-3 2015-4 2015-5 ...
P C P C P C P C P C
Cars 3 2 5 5 7 6 2 1 4 2
Bicycle 11 9 17 14 5 5 4 4 8 6
Scooter ...
Skateboard ...
As mentioned, I made the Calendar and Apps tables so that I can use their columns to fill the row and column labels. What I am unable to do is create such a 'general variable' for the number of products sold per product, and the number of customers associated with it.
Can someone explain to me how I can fill the matrix with the numbers of products (and customers) sold, so that the matrix looks like the one described above?
I think this is pretty straightforward. You actually don't need the 'Calendar' table, as it only contains the same info that is already in the 'Sales' table.
You should configure the matrix like this:
Rows: 'Name' (from the 'Apps' table)
Columns: 'Month' (from the 'Sales' table)
Values:
C = Count distinct of CustomerId (from the 'Sales' table) [this counts the unique customers per month and app]
P = Count of CustomerId (from the 'Sales' table) [this counts the rows of the 'Sales' table, which is your number of products if every row represents one sale]
The different aggregations (count distinct, count) can be found under the Values options.
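If a SQL analogy helps, the two values compute the equivalent of the query below. This is only a sketch: it assumes, as the answer does, that the 8 product tables are appended into a single 'Sales' table, and the column names are illustrative.

SELECT a.Name AS Product,
       s.Month,
       COUNT(s.CustomerId) AS P,           -- rows = number of products sold
       COUNT(DISTINCT s.CustomerId) AS C   -- unique customers
FROM Sales s
JOIN Apps a ON a.ID = s.AppID
GROUP BY a.Name, s.Month;

The matrix visual then pivots the Month values into columns.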

Type-wise summation and subtraction in Oracle

I have two tables for my store and am working on Oracle. The first table describes my transactions in the store; there are two types of transaction (MR & SR). MR means adding products to the store and SR means removing products from my storage. What I want to get is the final closing state of my storage, i.e. the final quantity of every product after all transactions. I have tried many solutions but couldn't finish it, so I can't show them now. Please help me sort out this problem. Thanks.
You can use CASE as below to increase or decrease the quantity based on the type, then group by name and sum the quantity derived from the CASE expression to get your desired result.
select row_number() over (order by a.Name) as Sl,
       a.Name,
       sum(a.qntity) as qntity
from (
  select t2.Name,
         case when t1.type = 'MR' then t2.qntity else -t2.qntity end as qntity
  from table1 t1
  join table2 t2 on t1.oid = t2.table01_oid
) a
group by a.Name;
This query will produce the result below:
SL NAME QNTITY
1 Balls 0
2 Books 6
3 Pencil 13

Get the average of a column in a one-to-many relation in Laravel

I have 2 tables:
stores
store_feedbacks | store_id, user_id, rating
rating is a number between 1 and 5. Now I want to get the average of all ratings for one store. How can I do it?
Assuming your relationships are correctly set up:
$store->ratings()->avg('rating');
Note that avg() needs the column name here. Docs: https://laravel.com/docs/5.7/queries#aggregates - these are from the query builder page, but the aggregates apply to Eloquent as well.
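For reference, that call boils down to a query like this (assuming the store_feedbacks table from the question and an example store id of 1):

SELECT AVG(rating)
FROM store_feedbacks
WHERE store_id = 1;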

Need a query that will satisfy two conditions from two tables

Table a and table b: table a has two fields, field 1 and 2, and table b has two fields, field 3 and 4.
where
tablea.field1 >= 4 and tableb.field3 = 'male'
Is something like the above query possible? I've tried something like this in my database; although there are no errors and I get results, it checks whether each condition is true separately.
I'm going to try to be a bit clearer. I can't give out the query outright as much as I would like to (university reasons), so I'll explain: table 1 has several columns of information, one of which is the number of kids; table 2 has more information on said kids, like gender.
So I'm having trouble creating a query that checks that a parent has 2 kids, but specifically two male kids, which requires a relationship between the parent table and the kids table.
CREATE TABLE parent
(pID NUMBER,
numberkids INTEGER)
CREATE TABLE kids
(kID NUMBER,
father NUMBER,
mother NUMBER,
gender VARCHAR(7))
select
p.pid
from
kids k
inner join parent pm on pm.pid = k.mother
inner join parent pf on pf.pid = k.father,
parent p
where
p.numberkids >= 2 and k.gender = 'male'
/
This query checks that the parent has 2 or more kids and that a kid's gender is male, but I need it to check whether, of those kids, there are 2 or more male ones (in short: whether the parent has 2 or more male kids).
Sorry for the long-winded explanation. I modified the tables and the query from the ones I'm actually going to use (so some mistakes might be in there, but the original query works, just not how I want, as explained above). Any help would be greatly appreciated.
The best thing to do would be to take the numberKids column out of the parent table ... you'll find it very difficult to maintain.
Anyway, something like this might do the trick:
SELECT p.pID
FROM parent p INNER JOIN kids k
ON p.pID IN (k.father, k.mother)
WHERE k.gender = 'male'
GROUP BY p.pID
HAVING COUNT(*) >= 2;

How to optimize a select from several tables with millions of rows

I have the following tables (Oracle 10g):
catalog (
id NUMBER PRIMARY KEY,
name VARCHAR2(255),
owner NUMBER,
root NUMBER REFERENCES catalog(id)
...
)
university (
id NUMBER PRIMARY KEY,
...
)
securitygroup (
id NUMBER PRIMARY KEY
...
)
catalog_securitygroup (
catalog REFERENCES catalog(id),
securitygroup REFERENCES securitygroup(id)
)
catalog_university (
catalog REFERENCES catalog(id),
university REFERENCES university(id)
)
Catalog: 500 000 rows, catalog_university: 500 000, catalog_securitygroup: 1 500 000.
I need to select any 50 rows from catalog with a specified root, ordered by name, for the current university and current securitygroup. Here is the query:
SELECT ccc.* FROM (
SELECT cc.*, ROWNUM AS n FROM (
SELECT c.id, c.name, c.owner
FROM catalog c, catalog_securitygroup cs, catalog_university cu
WHERE c.root = 100
AND cs.catalog = c.id
AND cs.securitygroup = 200
AND cu.catalog = c.id
AND cu.university = 300
ORDER BY name
) cc
) ccc WHERE ccc.n > 0 AND ccc.n <= 50;
Here 100 is some catalog, 200 some securitygroup, and 300 some university. This query returns 50 rows out of ~170 000 in 3 minutes.
But the next query returns those rows in 2 seconds:
SELECT ccc.* FROM (
SELECT cc.*, ROWNUM AS n FROM (
SELECT c.id, c.name, c.owner
FROM catalog c
WHERE c.root = 100
ORDER BY name
) cc
) ccc WHERE ccc.n > 0 AND ccc.n <= 50;
I built the following indexes: (catalog.id, catalog.name, catalog.owner), (catalog_securitygroup.catalog, catalog_securitygroup.securitygroup), (catalog_university.catalog, catalog_university.university).
Plan for first query (using PLSQL Developer):
http://habreffect.ru/66c/f25faa5f8/plan2.jpg
Plan for second query:
http://habreffect.ru/f91/86e780cc7/plan1.jpg
What are the ways to optimize the query I have?
The indexes that can be useful and should be considered deal with
WHERE c.root = 100
AND cs.catalog = c.id
AND cs.securitygroup = 200
AND cu.catalog = c.id
AND cu.university = 300
So the following fields are candidates for indexes:
c: id, root
cs: catalog, securitygroup
cu: catalog, university
So, try creating
(catalog_securitygroup.catalog, catalog_securitygroup.securitygroup)
and
(catalog_university.catalog, catalog_university.university)
EDIT:
I missed the ORDER BY - these fields should also be considered, so
(catalog.name, catalog.id)
might be beneficial (or some other composite index that could be used for sorting and the conditions - possibly (catalog.root, catalog.name, catalog.id))
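As concrete DDL, those suggestions would look something like this (the index names are made up):

CREATE INDEX cs_cat_sg_idx ON catalog_securitygroup (catalog, securitygroup);
CREATE INDEX cu_cat_univ_idx ON catalog_university (catalog, university);
-- Composite index covering both the root filter and the ORDER BY:
CREATE INDEX cat_root_name_idx ON catalog (root, name, id);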
EDIT 2:
Although another answer has been accepted, I'll provide some more food for thought.
I have created some test data and run some benchmarks.
The test cases are minimal in terms of record width (in catalog_securitygroup and catalog_university the primary keys are (catalog, securitygroup) and (catalog, university)). Here is the number of records per table:
test=# SELECT (SELECT COUNT(*) FROM catalog), (SELECT COUNT(*) FROM catalog_securitygroup), (SELECT COUNT(*) FROM catalog_university);
?column? | ?column? | ?column?
----------+----------+----------
500000 | 1497501 | 500000
(1 row)
The database is Postgres 8.4 on a default Ubuntu install; the hardware is an i5 with 4 GB RAM.
First I rewrote the query to
SELECT c.id, c.name, c.owner
FROM catalog c, catalog_securitygroup cs, catalog_university cu
WHERE c.root < 50
AND cs.catalog = c.id
AND cu.catalog = c.id
AND cs.securitygroup < 200
AND cu.university < 200
ORDER BY c.name
LIMIT 50 OFFSET 100
Note: the conditions are turned into less-than comparisons to maintain a comparable number of intermediate rows (the above query would return 198,801 rows without the LIMIT clause).
If run as above, without any extra indexes (save for PKs and foreign keys), it runs in 556 ms on a cold database (this is actually an indication that I oversimplified the sample data somehow; I would be happier if I had 2-4 s here without resorting to less-than operators).
This brings me to my point: any straight query that only joins and filters (a certain number of tables) and returns only a certain number of records should run in under 1 s on any decent database, without needing cursors or denormalized data (one of these days I'll have to write a post on that).
Furthermore, if a query returns only 50 rows and does simple equality joins with restrictive equality conditions, it should run much faster still.
Now let's see what happens if I add some indexes. The biggest potential in queries like this is usually the sort order, so let me try that:
CREATE INDEX test1 ON catalog (name, id);
This brings the query's execution time down to 22 ms on a cold database.
And that's the point: if you are trying to get only a page of data, you should only get a page of data, and execution times of queries such as this on normalized data with proper indexes should be less than 100 ms on decent hardware.
I hope I didn't oversimplify the case to the point of no comparison (as stated before, some simplification is present, since I don't know the cardinality of the relationships between catalog and the many-to-many tables).
So, the conclusion is:
If I were you, I would not stop tweaking the indexes (and the SQL) until the query's performance goes below 200 ms, as a rule of thumb.
Only if I found an objective explanation of why it can't go below that value would I resort to denormalization and/or cursors, etc.
First, I assume that your University and SecurityGroup tables are rather small. You posted the sizes of the large tables, but it's really the other sizes that are part of the problem.
Your problem comes from the fact that you can't join the smallest tables first. Your join order should go from small to large, but because your mapping tables don't include a securitygroup-to-university table, you can't join the smallest ones first. So you wind up starting with one or the other, joining to a big table, then to another big table, and only then, with that large intermediate result, going to a small table.
If you always have current_univ, current_secgrp and root as inputs, you want to use them to filter as soon as possible. The only way to do that is to change your schema somewhat. In fact, you can leave the existing tables in place if you have to, but you'll be adding to the space used with this suggestion.
You've normalized the data very well. That's great for speed of update... not so great for querying. We denormalize to speed up querying (that's the whole reason for data warehouses; OK, that and history). Build a single mapping table with the following columns:
Univ_id, SecGrp_ID, Root, catalog_id. Make it an index-organized table with those first 3 columns leading the PK.
Now when you query that index with all three of those values, you'll finish the index scan with a complete list of allowable catalog ids; then it's just a single join to the catalog table to get the item details, and you're off and running.
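A sketch of that suggestion in DDL (the table and constraint names are made up; catalog_id is included in the key so the primary key stays unique, since several catalogs can share the same university/securitygroup/root combination):

CREATE TABLE univ_secgrp_root_catalog (
  univ_id    NUMBER,
  secgrp_id  NUMBER,
  root       NUMBER,
  catalog_id NUMBER,
  CONSTRAINT usrc_pk PRIMARY KEY (univ_id, secgrp_id, root, catalog_id)
) ORGANIZATION INDEX;

-- The lookup then becomes one range scan on the IOT plus a single join:
SELECT c.id, c.name, c.owner
FROM univ_secgrp_root_catalog m
JOIN catalog c ON c.id = m.catalog_id
WHERE m.univ_id = 300
  AND m.secgrp_id = 200
  AND m.root = 100
ORDER BY c.name;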
The Oracle cost-based optimizer makes use of all the information that it has to decide what the best access paths are for the data and what the least costly methods are for getting that data. So below are some random points related to your question.
The first three tables that you've listed all have primary keys. Do the other tables (catalog_university and catalog_securitygroup) also have primary keys on them? A primary key defines a column or set of columns that are non-null and unique, and primary keys are very important in a relational database.
Oracle generally enforces a primary key by generating a unique index on the given columns. The Oracle optimizer is more likely to make use of a unique index if it is available, as it is likely to be more selective.
If possible, an index that contains unique values should be defined as unique (CREATE UNIQUE INDEX ...), as this will provide the optimizer with more information.
The additional indexes that you have provided are no more selective than the existing indexes. For example, the index on (catalog.id, catalog.name, catalog.owner) is unique but is less useful than the existing primary key index on (catalog.id). If a query is written to select on the catalog.name column, it is possible to do an index skip scan, but this starts being costly (and may not even be possible in this case).
Since you are trying to select based on the catalog.root column, it might be worth adding an index on that column. This would mean that it could quickly find the relevant rows from the catalog table. The timing for the second query could be a bit misleading: it might be taking 2 seconds to find 50 matching rows from catalog, but these could easily be the first 50 rows from the catalog table. Finding 50 that match all your conditions might take longer, and not just because you need to join to other tables to get them. I would always use CREATE TABLE AS SELECT without restricting on rownum when trying to performance tune; with a complex query I generally care about how long it takes to get all the rows back, and a simple select with rownum can be misleading.
Everything about Oracle performance tuning is about providing the optimizer enough information and the right tools (indexes, constraints, etc.) to do its job properly. For this reason it's important to get optimizer statistics using something like DBMS_STATS.GATHER_TABLE_STATS(). Indexes should have stats gathered automatically in Oracle 10g or later.
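For example, gathering statistics for one of the tables might look like this (the schema name is a placeholder):

EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'CATALOG');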
Somehow this grew into quite a long answer about the Oracle optimizer. Hopefully some of it answers your question. Here is a summary of what is said above:
Give the optimizer as much information as possible, e.g. if an index is unique then declare it as such.
Add indexes on your access paths
Find the correct timings for queries without limiting by rownum. It will always be quicker to find the first 50 M&Ms in a jar than to find the first 50 red M&Ms.
Gather optimizer stats
Add unique/primary keys on all tables where they exist.
The use of rownum here is wrong and causes all the rows to be processed: it will process all the rows, assign them all a row number, and then find those between 0 and 50. What you want to look for in the explain plan is COUNT STOPKEY rather than just COUNT.
The query below should be an improvement as it will only get the first 50 rows... but there is still the issue of the joins to look at too:
SELECT ccc.* FROM (
SELECT cc.*, ROWNUM AS n FROM (
SELECT c.id, c.name, c.owner
FROM catalog c
WHERE c.root = 100
ORDER BY name
) cc
where rownum <= 50
) ccc WHERE ccc.n > 0 AND ccc.n <= 50;
Also, assuming this is for a web page or something similar, maybe there is a better way to handle this than just running the query again to get the data for the next page.
Try declaring a cursor. I don't know Oracle, but in SQL Server it would look like this:
declare @result table (
  id numeric,
  name varchar(255)
);

declare __dyn_select_cursor cursor LOCAL SCROLL DYNAMIC for
--Select
select distinct
  c.id, c.name
from [catalog] c
inner join catalog_university u
  on u.catalog = c.id
  and u.university = 300
inner join catalog_securitygroup s
  on s.catalog = c.id
  and s.securitygroup = 200
where
  c.root = 100
order by name

--Cursor
declare @id numeric;
declare @name varchar(255);
declare @maxrowscount int;
set @maxrowscount = 50;

open __dyn_select_cursor;
fetch relative 1 from __dyn_select_cursor into @id, @name;

while (@@fetch_status = 0 and @maxrowscount <> 0)
begin
  insert into @result values (@id, @name);
  set @maxrowscount = @maxrowscount - 1;
  fetch next from __dyn_select_cursor into @id, @name;
end

close __dyn_select_cursor;
deallocate __dyn_select_cursor;

--Select temp, final result
select
  id,
  name
from @result;
