Get matching records in Laravel

I'm not sure if this is obvious and I don't see it because it is late here, but right now I'm struggling with the following:
I'm trying to find out whether there is a match anywhere. So, profile 2 liked profile 1, and profile 1 also liked profile 2. That would be a match.
I tried combining arrays, but that went nowhere. How could I achieve this with Laravel queries?

$likey = DB::table('likes AS liker')
    ->join('likes AS liked', 'liker.liked_id', '=', 'liked.liker_id')
    ->select('liked.liker_id', 'liked.liked_id')
    // compare the two columns to each other (a plain where() would treat
    // 'liked.liked_id' as a literal string); on older Laravel versions use
    // ->whereRaw('liker.liker_id = liked.liked_id') instead
    ->whereColumn('liker.liker_id', '=', 'liked.liked_id')
    ->get();
Something along those lines.
EDIT: Just to clarify this solution, so you don't get tempted to copy-paste it without ever figuring out what just happened here:
We are joining this table to itself (using an INNER JOIN, very important) simply because, just like you said, we have to check it twice: first for the liker (the one who liked someone's profile first), then for the liked user (the one who responded with a like in return). With that in mind, we join the table to itself, matching liked_id from the first copy against liker_id in the second copy.
That should give us a joined result looking like this:
liker.liker_id | liker.liked_id | liked.liker_id | liked.liked_id
-----------------------------------------------------------------
2 | 1 | 1 | 2
1 | 2 | 2 | 1
Mind you, this will give us duplicates! (VERY IMPORTANT)
With that in mind, I would think about redesigning your table. For example, adding a boolean column named "liked_back" will give you much cheaper and cleaner queries than a self-join like this.
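That said, if you do keep the self-join, one small tweak removes the duplicates: keep only the row where the lower profile ID comes first. A minimal sketch (the $matches variable name is mine, and whereColumn needs Laravel 5.2 or newer; whereRaw can express the same comparisons on older versions):

// Same self-join as above, but each matched pair is returned only once.
$matches = DB::table('likes AS liker')
    ->join('likes AS liked', 'liker.liked_id', '=', 'liked.liker_id')
    ->whereColumn('liker.liker_id', '=', 'liked.liked_id')  // mutual like
    // keep just one direction of each pair to drop the mirror-image row
    ->whereColumn('liker.liker_id', '<', 'liker.liked_id')
    ->select('liker.liker_id', 'liker.liked_id')
    ->get();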

Related

Best way to call functions while iterating through a foreach loop in Laravel Blade Template

I apologize if this is a stupid question – but I haven't found any great answers from searching online.
I have a database table of test scores, that looks something like this:
user  | q1 | q2 | q3 | q4 | q5
------------------------------
user1 |  3 |  3 |  2 |  1 |  5
user2 |  4 |  2 |  1 |  4 |  5
user1 |  4 |  3 |  3 |  2 |  5
Any given user can have multiple entries in the table.
In my blade file, I am iterating through all of the rows in the table:
@foreach($scores as $score)
    <tr>
        <td>{{ $score->user }}</td>
        <td>{{ $score->q1 }}</td>
        <td>{{ $score->q2 }}</td>
        <td>{{ $score->q3 }}</td>
        <td>{{ $score->q4 }}</td>
        <td>{{ $score->q5 }}</td>
        <td>{{ $score->getTotalScore() }}</td>
    </tr>
@endforeach
It is the getTotalScore() function in the last table cell that is causing the problem. I want to perform a slightly complex calculation based on all of the scores of the table – but I prefer not to do it inside the Blade file.
I DO have a working version where I make use of @php / @endphp inside the Blade file and do the calculations that way – but it goes against my aim of having as little "calculating" in my view as possible.
Trying to put a function in the Model doesn't work, because I am not really returning a relationship. So really, I'm just not sure how to call a function in the middle of a @foreach() loop while iterating through the returned results.
Can someone please point me in the right direction here?
Maybe you can use += in order to sum?
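If the goal is simply to keep the calculation out of the view, a plain (non-relationship) method on the Eloquent model works too. Below is a minimal sketch under some assumptions: the model is named Score, its table is scores, and a straight sum of the five columns stands in for the real "slightly complex" calculation.

// app/Score.php (location and namespace depend on your Laravel version)
namespace App;

use Illuminate\Database\Eloquent\Model;

class Score extends Model
{
    protected $table = 'scores'; // assumed table name

    public function getTotalScore()
    {
        // Placeholder logic; swap in the real calculation here.
        return $this->q1 + $this->q2 + $this->q3 + $this->q4 + $this->q5;
    }
}

With that in place, {{ $score->getTotalScore() }} works exactly as written in the Blade loop above, provided $scores is a collection of Score models (for example Score::all()) rather than raw rows from DB::table().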

Creating advanced SUMIF() calculations in Quicksight

I have a couple of joined Athena tables in Quicksight. The data looks something like this:
Ans_Count | ID | Alias
10 | 1 | A
10 | 1 | B
10 | 1 | C
20 | 2 | D
20 | 2 | E
20 | 2 | F
I want to create a calculated field such that it sums the Ans_Count column based on distinct IDs only. i.e., in the example above the result should be 30.
How do I do that?? Thanks!
Are you looking for the sum before or after applying a filter?
sumIf(Ans_Count, ID) may be what you're looking for.
If you need to always return the result of the sum, regardless of the filter on the visual, look at the sumOver() function.
You can use distinctCountOver at PRE_AGG level to count unique number of values for a given partition. You could use that count to drive the sumIf condition as well.
Example : distinctCountOver(operand, [partition fields], PRE_AGG)
More details about what the visual's group-by specification will be, plus an example of where the duplicate IDs come from, would help give a specific solution.
It might even be as simple as minOver(Ans_Count, [ID], PRE_AGG) and using SUM aggregation on top of it in the visual.
If you want another column with the values repeated, use sumOver(Ans_Count, [ID], PRE_AGG). Or, if you want to aggregate via QuickSight, you would use sumOver(sum(Ans_Count), [ID]).
I agree with the above suggestions to use sumOver(sum(Ans_Count), [ID]).
I have yet to understand the use cases for pre_agg, so if anyone has concrete examples please share them!
Another suggestion would be to do a sumOver + partition by in your table (if possible) before uploading the dataset, then check whether the results match QuickSight's aggregations. I find QuickSight can be tricky with calculated fields, aggregations, and nested ifs, so I've been doing calculations in SQL where possible before bringing the data into QuickSight, to get a better grasp of what the outputs should look like. This is obviously an extra step, but it can help in understanding how QuickSight pulls off calculations and arrives at its figures (the documentation doesn't always give much), and in spotting things that don't look right (I've had a few) before you share your analysis with a wider group.
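As a rough sketch of that pre-aggregation idea, assuming the joined data sits in a table called joined_data with the Ans_Count, ID, and Alias columns from the question (the table name is a placeholder, not the real schema):

-- Collapse to one Ans_Count per distinct ID, then total it.
-- For the sample data above this returns 30 (10 + 20).
SELECT SUM(ans_count) AS total_ans_count
FROM (
    SELECT ID, MIN(Ans_Count) AS ans_count   -- Ans_Count repeats within an ID
    FROM joined_data
    GROUP BY ID
) AS per_id;

The MIN() per ID plays the same role as the minOver(Ans_Count, [ID], PRE_AGG) suggestion above: it picks the single Ans_Count value that every row of that ID repeats.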

Referencing a table given in column as text

This should be an easy question. I'm fairly new to Power Query.
I have a report table with a column "Queries" that holds the names of queries I have in my workbook.
I wish to add a column that counts the number of rows in each of those queries.
The formula I use is =Table.AddColumn(Source, "RowCount", each Table.RowCount([Query]))
My report table would look like the one below:
| Queries | RowCount |
| Qry Apple | |
| Qry Orang | |
However I am getting the error:
Expression.Error: We cannot convert the value "Qry Apple" to type Table.
Details:
Value=Qry Apple
Type=Type
Does anyone know how to solve this?
Thanks!
= Table.AddColumn(Source, "Row Count", each Table.RowCount(Expression.Evaluate([Query],#sections[Section1])))
It seems like this is one of those things that requires random obscure knowledge about the structure of PQ. Expression.Evaluate needs to know the "environment" to resolve the string in, and it appears tables in PQ are sitting in a record called [Section1] in a global query called #sections.
I found a solution to this from Chris Webb's BI Blog: Expression.Evaluate() In Power Query/M.
Basically, we need to use Expression.Evaluate in order to read the text in the [Query] column as a table. Note also that you need to include the #shared parameter so it has access to the necessary environment. (For more details, see the linked blog and the references it gives.)
= Table.AddColumn(Source, "RowCount", each Table.RowCount(Expression.Evaluate([Query], #shared)))
Finally found the answer, after a year of practicing.
I tried using Expression.Evaluate as suggested, but to no avail, so I don't think that function can properly convert text into a #table. I stand to be corrected.
The solution does make use of #sections, though; thanks so much for the brilliant idea from @Alexis Olson and @Wedge!
I used the Record.Field function to "get" the tables into a column as table objects, then finished it off with the Table.RowCount function. For clarity, I split this into two steps.
So here it is:
let
    Source = Excel.CurrentWorkbook(),
    MyTables = Source{[Name="MyTables"]}[Content],
    GetTblObj = Table.AddColumn(MyTables, "MyTables", each Record.Field(#sections[Section1], [Query])),
    RowCount = Table.AddColumn(GetTblObj, "RowCount", each Table.RowCount([MyTables]))
in
    RowCount

How to optimize massive amounts of data in mysql tables using a join

Perhaps using a JOIN isn't the best option here, but here's the scenario:
I have two tables, one is for houses, the other for objects in that house.
I have 50 houses and 8000 objects.
Lastly, each object will be either black or white (boolean).
Each object must be associated with each house and each object must be either black or white, which means, through my current design, there are going to be 400,000 records (8,000 ones, 8,000 twos all the way up to 50) in the objects table! Not the best for optimization. And my site turned into geriatric snails smoking ganja when I tried to load the query on my webpage. It died.
The table I have for houses looks like this:
==============================
House| Other cols | Other cols
==============================
1 | |
2 | |
3 | |
4 | |
to 50
The table I have for objects looks like this:
============================
House_ID | Object | Color
============================
1 | 1 | 1
1 | 2 | 1
1 | 3 | 0
1 | 4 | 1
1 | 5 | 0
"House_ID" increments to 2 once "Object" reaches 8,000. This incrementing continues until House_ID reaches 50.
There must be a better way to create the association between a house and its objects, one where each object still carries its specific house ID but the query isn't quite so taxing on the server.
BTW, I'm using an INNER JOIN to combine both tables. I think this might be wrong, but don't know a way around it. Doing SQL queries in phpMyAdmin.
How would I join or set up my table/queries so that it's not so cumbersome?
You probably need to investigate indexing your tables. This is actually a fairly small data set for what you are doing.
If your table names are houses and objects, try:
CREATE INDEX houses_index ON houses (House)
and
CREATE INDEX house_objects_index ON objects (HouseID,Object)
This will make your queries run MUCH faster, if, as I presume, indexes do not already exist.
(You might also want to keep your column names consistent between tables; calling the field House in one table and HouseID in another is, I think, more confusing than calling it HouseID in both places.)
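For reference, this is the shape of join those indexes are meant to speed up. It is only a sketch: the table and column names are taken from the question (houses.House; House_ID, Object, and Color in objects), so adjust them, and the indexed columns, to whatever your schema actually uses.

-- Pull one house together with its 8,000 objects.
SELECT h.House, o.Object, o.Color
FROM houses AS h
INNER JOIN objects AS o
        ON o.House_ID = h.House
WHERE h.House = 7;

With an index on the join column of the objects table, MySQL can jump straight to that house's rows instead of scanning all 400,000 of them.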

How do I return multiple columns of data using ImportXML in Google Spreadsheets?

I'm using ImportXML in a Google Spreadsheet to access the user_timeline method in the Twitter API. I'd like to extract the created_at and text fields from the response and create a two-column display of the results.
Currently I'm doing this by calling the API twice, with
=ImportXML("http://twitter.com/status/user_timeline/matthewsim.xml?count=200","/statuses/status/created_at")
in the cell at the top of one column, and
=ImportXML("http://twitter.com/status/user_timeline/matthewsim.xml?count=200","/statuses/status/text")
in another.
Is there a way for me to create this display with a single call?
ImportXML supports using the xpath | separator to include as many queries as you like.
=ImportXML("http://url"; "//#author | //#catalogid| //#publisherid")
However, it does not expand the results into multiple columns. You get a single column of repeating triplets (or however many attributes you've selected), all stacked in column A.
The following is deprecated
2015.06.16: continue is not available in "the new Google Sheets" (see: The Google Documentation for continue).
However you don't need to use the automatically inserted CONTINUE() function to place your results.
=CONTINUE($A$2, (ROW()-ROW($A$2)+1)*$A$1-B$1, 1)
Placed in B2, that should cleanly fill down and to the right, giving you sane column data.
ImportXML is in A2.
A3 and below are how the CONTINUE() functions are automatically filled in.
A1 is the number of attributes.
B1:D1 are the attribute index for their columns.
Another way to convert the rows of =CONTINUE() into columns is to use transpose():
=transpose(importxml("http://url","//a | //b | //c"))
Just concatenate your queries with "|"
=ImportXML("http://twitter.com/status/user_timeline/matthewsim.xml?count=200","/statuses/status/created_at | /statuses/status/text")
I posed this question to the Google Support Forum, and this was a solution that worked for me:
=ArrayFormula(QUERY(QUERY(IFERROR(IF({1,1,0},IF({1,0,0},INT((ROW(A:A)-1)/2),MOD(ROW(A:A)-1,2)),IMPORTXML("http://example.com","//td/a | //td/a/#href"))),"select min(Col3) where Col3 <> '' group by Col1 pivot Col2",0),"offset 1",0))
Replace the contents of IMPORTXML with your data and query and see if that works for you.
Apparently, this attempts to invoke the IMPORTXML function only once. It's a solution for now, at least.
Here's the full thread.
This is the best solution (NOT MINE), posted in the comments below. To be honest, I'm not sure how it works. Perhaps @Pandora, the original poster, could provide an explanation.
=ArrayFormula(iferror(hlookup(1,{1;ARRAY},(row(A:A)+1)*2-transpose(sort(row(A1:A2)+0,1,0)))))
This is a very ugly solution and doesn't even explain how it works. At least I couldn't get it to work due to multiple errors, e.g. too many parameters for IF (because an array is used). A shorter solution can be found here: =ArrayFormula(iferror(hlookup(1,{1;ARRAY},(row(A:A)+1)*2-transpose(sort(row(A1:A2)+0,1,0))))) where "ARRAY" can be replaced with the IMPORTXML function. This function can be used for as many XPaths as one wants. – Pandora Mar 7 '19 at 15:51
In particular, it would be good to know how to modify the formula to accommodate more columns.
