Propel, selecting just one column

I am trying to run a query in propel that runs an aggregate function (SUM).
My Code
$itemQuery = SomeEntityQuery::create();
$itemQuery->withColumn('SUM(SomeColumn)', 'someColumn')
    ->groupBy('SomeForeignKey');
Problem
In theory it should return the sum for every group of items, but the problem is that Propel tries to fetch all columns and also appends a number of other columns to the GROUP BY clause. This results in an unexpected categorisation and therefore the sum is incorrect.
Is there any way to make Propel fetch just the column I am running the aggregate function on, so that the GROUP BY statement works as well?

You need to add a select() call for the column and the foreign key:
$itemQuery = SomeEntityQuery::create();
$itemQuery->select(array('SomeColumn', 'SomeForeignKey'));
$itemQuery->withColumn('SUM(SomeColumn)', 'someColumn');
$itemQuery->groupBy('SomeForeignKey');
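For completeness, a minimal sketch of running such a query and reading the results, assuming a Propel-generated SomeEntityQuery class; the 'SumOfSomeColumn' alias and the echo loop are only for illustration:
$rows = SomeEntityQuery::create()
    ->withColumn('SUM(SomeColumn)', 'SumOfSomeColumn')
    ->select(array('SomeForeignKey', 'SumOfSomeColumn'))
    ->groupBy('SomeForeignKey')
    ->find();

foreach ($rows as $row) {
    // with select(), each $row is an associative array keyed by the selected names
    echo $row['SomeForeignKey'] . ' => ' . $row['SumOfSomeColumn'] . PHP_EOL;
}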

Related

New column indicating if row is the first instance of the value for the Entity ID using SQL instead of DAX

I currently have a column that is created using the following DAX formula (a calculation language used by platforms such as Power BI), which indicates whether the listed activity is the first one ever for that Entity ID. Below is my DAX script if it helps at all:
// "Declares column name"
First Time Activity =
// "if the column 'Timestamp' is equal to..."
if('Activity Table'[Timestamp]=
// "...is equal to the earliest Timestamp for that Entity ID and Activity Name"
CALCULATE(min('Activity Table'[Timestamp]),
filter('Activity Table',
'Activity Table'[Entity ID] = earlier('Activity Table'[Entity ID]) &&
'Activity Table'[Activity Name] = earlier('Activity Table'[Activity Name])
)
)
// "...then return a 1. If not, then return a blank/null"
,1,BLANK())
But I now need this to be a column made in PL/SQL rather than in DAX. Any help on the SQL script would be much appreciated, since I'm fairly new to SQL.
Thanks
You don't actually need a column; you can write your query as:
SELECT a.*,
       DECODE(a.activity_date,
              MIN(a.activity_date) OVER (PARTITION BY a.activity_id),
              'Y',
              'N') AS first_record_indicator
FROM activity_table a
But if your table is too large to query like this every time, you can create a column named first_record_indicator and populate it in a BEFORE INSERT trigger (a rough sketch follows below).
e.g. https://www.techonthenet.com/oracle/triggers/before_insert.php
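A rough sketch of what such a trigger could look like, reusing the names assumed in the query above (activity_table, activity_id, first_record_indicator); note that a row-level trigger reading its own table is only safe for single-row INSERT ... VALUES statements, so treat this as a starting point rather than a drop-in solution:
CREATE OR REPLACE TRIGGER trg_activity_first_record
BEFORE INSERT ON activity_table
FOR EACH ROW
DECLARE
  v_existing NUMBER;
BEGIN
  -- Count rows already stored for the same activity_id (column names are assumptions).
  SELECT COUNT(*)
    INTO v_existing
    FROM activity_table t
   WHERE t.activity_id = :NEW.activity_id;

  :NEW.first_record_indicator := CASE WHEN v_existing = 0 THEN 'Y' ELSE 'N' END;
END;
/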

Extract more rows from a left join with Laravel

I have to extract the rows where the created_at falls inside the current week. Unfortunately, only one row is extracted instead of the several rows I expected. Why?
Query:
$scadenze = DB::table('processi')
    ->leftJoin('scadenze', 'processi.id', '=', 'scadenze.processo_id')
    ->where('responsabile', $utente->id)
    ->whereNotIn('scadenze.stato', [4, 5])
    ->whereBetween('scadenze.termine_stimato', [Carbon::now()->startOfWeek(), Carbon::now()->endOfWeek()])
    ->avg('tempistica');
This query extracts just one row, but in reality many more rows should be returned.
Because ->avg('tempistica') returns the average value over all the rows matched by this query, i.e. it returns just one value.
Solution:
I was wrong to use the avg function where I meant sum. The rows were matched correctly, but instead of being added up (by tempistica) an average was taken. Thank you all for your help.
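For reference, a minimal sketch of the corrected query, identical to the one above except that avg() is swapped for sum() (the $totale variable name is only for illustration):
$totale = DB::table('processi')
    ->leftJoin('scadenze', 'processi.id', '=', 'scadenze.processo_id')
    ->where('responsabile', $utente->id)
    ->whereNotIn('scadenze.stato', [4, 5])
    ->whereBetween('scadenze.termine_stimato', [Carbon::now()->startOfWeek(), Carbon::now()->endOfWeek()])
    ->sum('tempistica'); // one total per query, rather than one average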

Laravel 5.4 - group by, count and join

I have two tables (models have the same name as the tables):
StatusNames: id|name
and
CurrentUserStatus: id|user_id|status_id
At the moment CurrentUserStatus is empty, and StatusNames has several records inserted (Active, Inactive, On Pause, Terminated...).
I need to get all the data from CurrentUserStatus and show how many records there are within each status (given the current tables, next to each status name there should be zero (0)).
Is this possible to do with one query?
Based on what I assume your relationships look like, you can do something like this:
$dataset = CurrentUserStatus::whereHas('status')
->with('status')
->withCount('status')
->orderBy('status_count', 'desc')
->get();
Hope this helps.
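Since statuses with no users should still show up with a count of zero, another option is to start from StatusNames instead. A sketch, assuming StatusNames defines a hasMany relation named currentUserStatuses() pointing at CurrentUserStatus.status_id (both the relation name and the loop are assumptions):
$statuses = StatusNames::withCount('currentUserStatuses')->get();

foreach ($statuses as $status) {
    // withCount() exposes the total as <relation>_count
    echo $status->name . ': ' . $status->current_user_statuses_count . PHP_EOL;
}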

How can I group, sum and count with Sequel ORM and PostgreSQL?

This is too tough for me guys. It's for Jeremy!
I have two tables (although I can also envision needing to join a third table) and I want to sum one field and count rows in the same table, while joining with another table, and return the result in JSON format.
First of all, the field that needs to be summed has data type numeric(10,2), and the data is inserted as params['amount'].to_f.
The tables are expense_projects, which has the name of the project and the company id, and expense_items, which has the company_id, item and amount (to mention just the critical columns); the "company_id" columns are disambiguated.
So, the following code:
expense_items = DB[:expense_projects].left_join(:expense_items, :expense_project_id => :project_id).where(:project_company_id => company_id).to_a.to_json
works fine but when I add
expense_total = expense_items.sum(:amount).to_f.to_json
I get an error message which says
TypeError - no implicit conversion of Symbol into Integer:
So the first question is: why does this happen, and how can it be fixed?
Then I want to join the two tables, get all the project names from the left (first) table, and sum the amounts and count the items in the second table. I have tried
DB[:expense_projects].left_join(:expense_items, :expense_items_company_id => expense_projects_company_id).count(:item).sum(:amount).to_json
and variations of this, all of which fail.
I would like a result which gets all the project names (even if there are no expense entries) and returns something like:
project    item_count    item_amount
pr 1       7             34.87
pr 2       0             0
and so on. How can this be achieved with one query returning the result in json format?
Many thanks, guys.
Figured it out, I hope this helps somebody else:
DB[:expense_projects___p].where(:project_company_id=>user_company_id).
left_join(:expense_items___i, :expense_project_id=>:project_id).
select_group(:p__project_name).
select_more{count(:i__item_id)}.
select_more{sum(:i__amount)}.to_a.to_json
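As for the original TypeError: to_a.to_json turns the dataset into a plain JSON string, so the later .sum(:amount) call runs on a String rather than on a Sequel dataset, which is why the Symbol argument is rejected. A minimal sketch that keeps the dataset object around and only serialises at the end (names as in the question):
items_ds = DB[:expense_projects]
             .left_join(:expense_items, :expense_project_id => :project_id)
             .where(:project_company_id => company_id)

expense_items = items_ds.all.to_json                  # the joined rows, as JSON
expense_total = items_ds.sum(:amount).to_f.to_json    # the aggregate, as JSON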

Max/Min for whole sets of records in PIG

I have a set of records that I am loading from a file, and the first thing I need to do is get the max and min of a column.
In SQL I would do this with a subquery like this:
select c.state, c.population,
(select max(c.population) from state_info c) as max_pop,
(select min(c.population) from state_info c) as min_pop
from state_info c
I assume there must be an easy way to do this in Pig as well, but I'm having trouble finding it. Pig has MAX and MIN functions, but when I tried the following it didn't work:
records=LOAD '/Users/Winter/School/st_incm.txt' AS (state:chararray, population:int);
with_max = FOREACH records GENERATE state, population, MAX(population);
I had better luck adding an extra column with the same value to every row, grouping on that column, and then getting the max of that new group, but that seems like a convoluted way of getting what I want, so I thought I'd ask if anyone knows a simpler way.
Thanks in advance for the help.
As you said, you need to group all the data together, but no extra column is required if you use GROUP ALL.
Pig
records = LOAD 'states.txt' AS (state:chararray, population:int);
records_group = GROUP records ALL;
with_max = FOREACH records_group
GENERATE
FLATTEN(records.(state, population)), MAX(records.population);
Input
CA 10
VA 5
WI 2
Output
(CA,10,10)
(VA,5,10)
(WI,2,10)
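Since the question also asks for the minimum, the same grouped relation can produce both aggregates at once; a small sketch along the same lines:
with_max_min = FOREACH records_group
               GENERATE
               FLATTEN(records.(state, population)),
               MAX(records.population) AS max_pop,
               MIN(records.population) AS min_pop;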
