Delete duplicate records from large Laravel collection - laravel

I have a large database table (~1 million records) from which I need to purge duplicate records. The table structure is as follows:
|----|-------------|-------|------|---------------------|
| id | relation_id | foo   | bar  | timestamp           |
|----|-------------|-------|------|---------------------|
| 1  | 1           | 14.20 | 0.22 | 2019-10-21 14:00:01 |
| 2  | 1           | 14.20 | 0.22 | 2019-10-21 14:00:01 |
| 3  | 1           | 14.20 | 0.22 | 2019-10-21 14:00:01 |
| 4  | 2           | 10.36 | 0.75 | 2019-10-21 14:00:01 |
| 5  | 2           | 10.36 | 0.75 | 2019-10-21 14:00:01 |
| 6  | 2           | 10.36 | 0.75 | 2019-10-21 14:00:01 |
|----|-------------|-------|------|---------------------|
As per the example above, there are a lot of records that have the exact same combination of values for relation_id, foo, bar and timestamp. I need to create a script that will identify the unique records and then delete the duplicates. So I would end up with something like:
|----|-------------|-------|------|---------------------|
| id | relation_id | foo   | bar  | timestamp           |
|----|-------------|-------|------|---------------------|
| 1  | 1           | 14.20 | 0.22 | 2019-10-21 14:00:01 |
| 4  | 2           | 10.36 | 0.75 | 2019-10-21 14:00:01 |
|----|-------------|-------|------|---------------------|
I have tested looping through the relation_id (as there are only 20 unique values) and then running something like this to create a collection of the unique records:
$unique = collect([]);
Model::where('relation_id', $relation_id)->chunk(100, function ($items) use ($unique) {
    $unique->push($items->unique()->values()->all());
});
From that, I had planned to loop through all of the Model records and delete if the item was not within the $unique collection. Something like this:
Model::chunk(100, function ($items) use ($unique) {
    foreach ($items as $item) {
        if (!$unique->contains('id', $item->id)) {
            $item->delete();
        }
    }
});
My problem is as the database table is so large, I cannot test if this logic works. Running the first part of the above script (to populate $unique) for a single $relation_id ran in tinker for 30 minutes without yielding results.
I'm relatively confident this isn't the best approach to deleting duplicate records, as it requires multiple queries, which I assume could be optimised (critical when dealing with such a large table).
So what is the most efficient way to query a database table to check for unique records (based on multiple columns) and delete the duplicate records?

You can let the database do the heavy lifting here. You can query the database using GROUP BY, then remove everything that doesn't match your query.
$ids = Model::groupBy(['relation_id', 'foo', 'bar', 'timestamp'])
    ->pluck('id')
    ->all();
This translates to the following SQL:
SELECT id FROM models GROUP BY relation_id, foo, bar, timestamp;
So now $ids is an array of IDs, one per unique combination of the other columns ([1, 4]). (Note that selecting a column that isn't in the GROUP BY is rejected when MySQL's ONLY_FULL_GROUP_BY mode is enabled, which is the default since 5.7; in that case select MIN(id) instead.) You can then execute the following to remove all other rows from the DB:
Model::whereNotIn('id', $ids)->delete();
However, since $ids is probably huge, you are likely to hit an upper limit on query size. In that case, you can use array_chunk() to add multiple whereNotIn clauses to the query:
$query = Model::query();
foreach (array_chunk($ids, 500) as $chunk) {
    $query->whereNotIn('id', $chunk);
}
$query->delete();
I created an SQL Fiddle where you can test this out.

For anyone experiencing a similar issue: I opted to use raw MySQL to handle the clean-up of the database, as it was far more efficient than loading the records into memory with Eloquent.
This is the script I used:
DELETE table_name FROM table_name
LEFT OUTER JOIN (
    -- keep the row with the lowest ID for each unique combination
    SELECT MIN(ID) AS minID
    FROM table_name
    GROUP BY table_name.relation_id, table_name.timestamp
) AS keepRowTable ON table_name.ID = keepRowTable.minID
WHERE keepRowTable.minID IS NULL
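If you'd rather trigger this from Laravel itself (say, from a one-off artisan command or migration) than run it directly in MySQL, a minimal sketch using a raw statement; table_name and the grouped columns are the placeholders from the script above:

use Illuminate\Support\Facades\DB;

// Execute the same clean-up query as a raw statement
DB::statement('
    DELETE table_name FROM table_name
    LEFT OUTER JOIN (
        SELECT MIN(ID) AS minID
        FROM table_name
        GROUP BY relation_id, timestamp
    ) AS keepRowTable ON table_name.ID = keepRowTable.minID
    WHERE keepRowTable.minID IS NULL
');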
I accepted @Vince's answer because his approach works in Laravel, but we ran into issues when trying to process such a large dataset. Plus, he's a hero for being so responsive in the comments!

Related

Assign id from foreign table to current table laravel

I am using Laravel Eloquent to get the query results. I have the two tables below:
users table:
| id | department_id |
|----|---------------|
| 1  | 1             |
| 2  | 3             |
| 3  | 2             |
department table:
| id | name |
|----|------|
| 1  | A    |
| 2  | B    |
| 3  | C    |
| 4  | D    |
| 5  | E    |
How can I get one unassigned department ID, i.e. a department ID that does not yet exist in the users table? In the example, 4 and 5 are not yet used in the users table, so how can I get 4 or 5 using Eloquent?
I was thinking of something like this, but it is not correct:
Department::select('department.id as id')
    ->leftJoin('users', 'users.department_id', 'department.id')
    ->pluck('id');
Does anybody know?
Try this:
// First, get all the department IDs that are already assigned to users
$assigned_dept = Users::pluck('department_id')->toArray();
$department = array_values($assigned_dept); // e.g. ['1', '3', '2']

// Then select a department that is not assigned to any user, with a limit
$user = Department::whereNotIn('id', $department)
    ->limit(1)->get();
Hope it works for you.
You can do it like this:
Department::whereNotIn('id', User::pluck('department_id'))->get();
I believe the code below will work for you:
Department::select('department.id as id')
    ->whereNotIn('id', User::whereNotNull('department_id')->pluck('department_id'))
    ->pluck('id');
Building on the other answers: there is a problem with the array if department_id is NULL, so I added whereNotNull() and also last(), and that solved the problem. Let me post the answer here:
Department::select('department.id as id')
    ->whereNotIn('id', User::whereNotNull('department_id')->pluck('department_id'))
    ->pluck('id')
    ->last(); // since I only need one row
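To then actually use the free ID, a minimal sketch; the $user variable and the save() step are assumptions for illustration, not part of the answers above:

// value('id') fetches a single column from the first matching row
$freeDepartmentId = Department::whereNotIn(
    'id',
    User::whereNotNull('department_id')->pluck('department_id')
)->value('id');

// Assign it to an existing user and persist
$user->department_id = $freeDepartmentId;
$user->save();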

Laravel Eloquent Model with multiple IDs

I will have a table that is full of information involving other tables (relations). Most of the columns in this table will only hold the IDs of the related tables. If I were to use "products" as an example for this table, it may look like this for some of the columns:
id | name  | type_id | price_id | location_id | sale_id
--------------------------------------------------------
1  | prod1 | 1       | 1        | 2           | 4
2  | prod2 | 2       | 1        | 1           | 1
3  | prod3 | 3       | 2        | 6           | 2
4  | prod4 | 1       | 2        | 3           | 4
I'm trying to take this "products" table and dump it out into a list. As I do, I need to look up the items behind each of these ID columns (the relations). I know how to use belongsToMany and hasMany, but I'm not sure how to do this in one shot with an Eloquent model if I have a "products" model. Should I just make the products table a pivot table? Can I do it with an Eloquent model, or should I use the query builder directly? I think if I were to use withPivot it would return the extra columns, but only the raw ID values from those columns, whereas I need the looked-up values from their respective tables (the relations).
Tried something like this:
public function productItems()
{
    return $this->belongsToMany(Product::class)->withPivot(["type_id", "price_id", ...]);
}
As suggested by @BagusTesa, you should eager load your relations:
$products = Product::with(['type', 'price', 'location'])->get();
That will query for the related models allowing you to access them as model properties:
foreach ($products as $product) {
    // $product->type;
    // $product->price;
    // $product->location;
}
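Note that with(['type', 'price', 'location']) assumes the Product model defines those relations. A minimal sketch, assuming related models named Type, Price and Location and Laravel's default foreign-key conventions (type_id, price_id, location_id):

use Illuminate\Database\Eloquent\Model;

class Product extends Model
{
    public function type()
    {
        return $this->belongsTo(Type::class);
    }

    public function price()
    {
        return $this->belongsTo(Price::class);
    }

    public function location()
    {
        return $this->belongsTo(Location::class);
    }
}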

Retrieving 1 pivot row per relation based on pivot values in Laravel belongsToMany relation

Background - I'm creating a system where administrators can create arbitrary fields, which are then combined into a form. Users then complete this form, and the values input against each field are stored in a table. However, rather than overwrite the previous value, I plan on keeping each past value as an individual row in the table. I then want to be able to display the values submitted in each form, but only the most recently submitted ones.
Problem
I have a model, Service, that features a belongsToMany relationship with another model, Field. This relationship is defined as:
public function fields()
{
    return $this->belongsToMany('App\Field')->withPivot('id', 'value', 'date')->withTimestamps();
}
The intermediary table has 3 values I wish to retrieve: id, value and date.
A Service may have 1 or more Fields, and for each field it may also have more than 1 pivot row. That is, a single Service/Field pairing may have multiple entries in the pivot table with different pivot values. For example:
Table field_service:
id | service_id | field_id | value | created_at
------------------------------------------------
1  | 1          | 1        | lorem | 2018-02-01
2  | 1          | 1        | ipsum | 2018-01-01
3  | 1          | 1        | dolor | 2017-12-01
4  | 1          | 2        | est   | 2018-03-10
5  | 1          | 2        | sicum | 2018-03-09
6  | 1          | 2        | hoci  | 2018-03-08
What I want is to get either:
A specific row from the pivot table for each Field associated with the Service, or
A specific value from the pivot table for each Field associated with the Service.
For example - in the table above, I would like the Service with ID 1 to have 2 Fields in the relationship, with each Field containing an attribute for the corresponding pivot value. The Fields attached would be specified by the corresponding pivot table entry having the most recent date. Something akin to:
$service->fields()[0]->value = "lorem"
$service->fields()[1]->value = "est"
I feel there's an obvious, 'Laravel'ly solution out there, but it eludes me...
Update
Somewhat unbelievably, this is another case of me not understanding windowing functions. I asked a question 7 years ago that is basically this exact problem, but with raw MySQL. The following raw MySQL basically gives me what I want; I just don't know how to Laravelise it:
SELECT services.name, fields.name, field_service.value, field_service.created_at, field_service.field_id
FROM field_service
INNER JOIN (
    SELECT field_id, MAX(created_at) AS ts
    FROM field_service
    WHERE service_id = X
    GROUP BY field_id
) maxt ON (field_service.field_id = maxt.field_id AND field_service.created_at = maxt.ts)
JOIN fields ON fields.id = field_service.field_id
JOIN services ON services.id = field_service.service_id
Try this:
public function fields()
{
    // Derived table: the latest created_at per field for this service
    $join = DB::table('field_service')
        ->select('field_id')->selectRaw('max(`created_at`) as `ts`')
        ->where('service_id', DB::raw($this->id))
        ->groupBy('field_id');

    $sql = '(' . $join->toSql() . ') `maxt`';

    // Join the pivot table against the derived table so that only the
    // most recent pivot row per field survives
    return $this->belongsToMany(Field::class)->withPivot('id', 'value', 'created_at')
        ->join(DB::raw($sql), function ($join) {
            $join->on('field_service.field_id', '=', 'maxt.field_id')
                ->on('field_service.created_at', '=', 'maxt.ts');
        });
}
Then use it like this:
$service->fields[0]->pivot->value // "lorem"
$service->fields[1]->pivot->value // "est"

Oracle: Identify non-unique values in a CLOB column of a table

I want to identify all rows whose content in a clob column is not unique.
The query I use is:
select
    id,
    clobtext
from
    table t
where
    (select count(*) from table innerT where dbms_lob.compare(innerT.clobtext, t.clobtext) = 0) > 1
However, this query is very slow. Any suggestions to speed it up? I already tried using the dbms_lob.getlength function to eliminate more elements in the subquery, but it didn't really improve the performance (it feels the same).
To make it more clear, an example:
table:
ID | clobtext
-------------
1  | a
2  | b
3  | c
4  | d
5  | a
6  | d
After running the query, I'd like to get (order doesn't matter):
1 | a
4 | d
5 | a
6 | d
In the past I've generated checksums (in my C# code) for each CLOB. Whilst this will incur a one-off increase in I/O (to generate the checksums), subsequent scans will be quicker, and you can index the checksum value too.
Tom Kyte has a good PL/SQL example here:
Ask Tom
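A rough sketch of the checksum idea directly in Oracle SQL, assuming your table is named t and you have EXECUTE privilege on DBMS_CRYPTO; since MD5 collisions are theoretically possible, a final dbms_lob.compare check may still be warranted:

-- 2 = DBMS_CRYPTO.HASH_MD5 (package constants can't be referenced in plain SQL)
SELECT id, clobtext
FROM t
WHERE DBMS_CRYPTO.HASH(clobtext, 2) IN (
    SELECT DBMS_CRYPTO.HASH(clobtext, 2)
    FROM t
    GROUP BY DBMS_CRYPTO.HASH(clobtext, 2)
    HAVING COUNT(*) > 1
);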

Will this type of pagination scale?

I need to paginate on a set of models that can/will become large. The results have to be sorted so that the latest entries are the ones that appear on the first page (and then, we can go all the way to the start using 'next' links).
The query to retrieve the first page is the following; 4 is the number of entries I need per page:
SELECT "relationships".* FROM "relationships" WHERE ("relationships".followed_id = 1) ORDER BY created_at DESC LIMIT 4 OFFSET 0;
Since this needs to be sorted and since the number of entries is likely to become large, am I going to run into serious performance issues?
What are my options to make it faster?
My understanding is that an index on followed_id will simply help the WHERE clause. My concern is the ORDER BY.
Create an index that contains these two fields, in this order: (followed_id, created_at).
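For example (a sketch; the index name is arbitrary):

CREATE INDEX idx_relationships_followed_created
    ON relationships (followed_id, created_at);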
Now, how large is the "large" we are talking about here? If it will be of the order of millions, how about something like the following?
Create an index on the keys followed_id, created_at, id (this might change depending upon the fields in the SELECT, WHERE and ORDER BY clauses; I have tailor-made this to your question):
SELECT relationships.*
FROM relationships
JOIN (
    SELECT id
    FROM relationships
    WHERE followed_id = 1
    ORDER BY created_at
    LIMIT 10 OFFSET 10
) itable ON relationships.id = itable.id
ORDER BY relationships.created_at
An explain would yield this:
+----+-------------+---------------+------+---------------+-------------+---------+------+------+-----------------------------------------------------+
| id | select_type | table         | type | possible_keys | key         | key_len | ref  | rows | Extra                                               |
+----+-------------+---------------+------+---------------+-------------+---------+------+------+-----------------------------------------------------+
|  1 | PRIMARY     | NULL          | NULL | NULL          | NULL        | NULL    | NULL | NULL | Impossible WHERE noticed after reading const tables |
|  2 | DERIVED     | relationships | ref  | sample_rel2   | sample_rel2 | 5       |      |    1 | Using where; Using index                            |
+----+-------------+---------------+------+---------------+-------------+---------+------+------+-----------------------------------------------------+
If you examine it carefully, the sub-query containing the ORDER BY, LIMIT and OFFSET clauses operates directly on the index instead of the table, and only the final join touches the table, to fetch the 10 records.
It makes a difference once your query reaches something like LIMIT 10 OFFSET 10000: without this trick, MySQL has to read 10010 rows from the table and discard the first 10000, whereas the sub-query restricts that traversal to just the index.
An important note: I tested this in MySQL. Other databases might have subtle differences in behavior, but the concept holds good no matter what.
You can index these fields, but it depends:
You can (mostly) assume that created_at is already in insertion order, so indexing it might be unnecessary. But that depends more on your app.
In any case, you should index followed_id (unless it's the primary key).
