In Laravel 9, there's syncWithPivotValues, which can sync several records with the same passed pivot values. Is there such a thing for the attach method? Basically, I want to attach several records with the same pivot values, like this:
// attach `role` 1, 2, 3 to `$user`, with the `active` attribute set to `true`
$user->roles()->attach([1, 2, 3], ['active' => true]);
You can do something like this:
$user->roles()->attach([
    1 => ['active' => true],
    2 => ['active' => true],
    3 => ['active' => true],
]);
Resource: https://laravel.com/docs/9.x/eloquent-relationships#updating-many-to-many-relationships
If you don't want to repeat ['active' => true] for every ID, you can build the array with array_fill_keys, like this: array_fill_keys([1, 2, 3], ['active' => true]), so your code will look like:
$user->roles()->attach(array_fill_keys([1, 2, 3], ['active' => true]));
Resource: https://www.php.net/manual/en/function.array-fill-keys.php
I'm trying to complete a project-based assessment for a job interview, and they only offer it in Ruby on Rails, which I know little to nothing about. I'm trying to take one hash that contains two or more arrays of hashes and combine those arrays into one array of hashes, while eliminating duplicate hashes based on their "id" value.
So I'm trying to take this:
h = {
  'first' =>
    [
      { 'authorId' => 12, 'id' => 2, 'likes' => 469 },
      { 'authorId' => 5, 'id' => 8, 'likes' => 735 },
      { 'authorId' => 8, 'id' => 10, 'likes' => 853 }
    ],
  'second' =>
    [
      { 'authorId' => 9, 'id' => 1, 'likes' => 960 },
      { 'authorId' => 12, 'id' => 2, 'likes' => 469 },
      { 'authorId' => 8, 'id' => 4, 'likes' => 728 }
    ]
}
And turn it into this:
[
  { 'authorId' => 12, 'id' => 2, 'likes' => 469 },
  { 'authorId' => 5, 'id' => 8, 'likes' => 735 },
  { 'authorId' => 8, 'id' => 10, 'likes' => 853 },
  { 'authorId' => 9, 'id' => 1, 'likes' => 960 },
  { 'authorId' => 8, 'id' => 4, 'likes' => 728 }
]
Ruby has many ways to achieve this.
My first instinct is to group them by id and pick only the first item from each group:
h.values.flatten.group_by{|x| x["id"]}.map{|k,v| v[0]}
A much cleaner approach is to pick the distinct items based on id after flattening the arrays of hashes, which is what Cary Swoveland suggested in the comments:
h.values.flatten.uniq { |h| h['id'] }
TL;DR
The simplest solution to the problem that fits the data you posted is h.values.flatten.uniq. You can stop reading here unless you want to understand why you don't need to care about duplicate IDs with this particular data set, or when you might need to care and why that's often less straightforward than it seems.
Near the end I also mention some features of Rails that address edge cases that you don't need for this specific data. However, they might help with other use cases.
Skip ID-Specific Deduplication; Focus on Removing Duplicate Hashes Instead
First of all, you have no duplicate id keys that aren't also part of duplicate Hash objects. Despite the fact that Ruby implementations preserve entry order of Hash objects, a Hash is conceptually unordered. Pragmatically, that means two Hash objects with the same keys and values (even if they are in a different insertion order) are still considered equal. So, perhaps unintuitively:
{'authorId' => 12, 'id' => 2, 'likes' => 469} ==
{'id' => 2, 'likes' => 469, 'authorId' => 12}
#=> true
Given your example input, you don't actually have to worry about unique IDs for this exercise. You just need to eliminate duplicate Hash objects from your merged Array, and you have only one of those.
duplicate_ids =
h.values.flatten.group_by { _1['id'] }
.reject { _2.one? }.keys
#=> [2]
unique_hashes_with_duplicate_ids =
h.values.flatten.group_by { _1['id'] }
.reject { _2.uniq.one? }.count
#=> 0
As you can see, 'id' => 2 is the only ID found in both Hash values, albeit in identical Hash objects. Since you have only one duplicate Hash, the problem has been reduced to flattening the Array of Hash values stored in h so that you can remove any duplicate Hash elements (not duplicate IDs) from the combined Array.
Solution to the Posted Problem
There might be use cases where you need to handle the uniqueness of Hash keys, but this is not one of them. Unless you want to sort your result by some key, all you really need is:
h.values.flatten.uniq
Since you aren't being asked to sort the Hash objects in your consolidated Array, you can avoid the need for another method call that (in this case, anyway) is a no-op.
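To make that concrete, here is what the call returns for the h posted in the question (it matches the expected output above), with an optional sort_by chained on the end purely to illustrate how you could order by id if that were ever wanted; it isn't required here:
h.values.flatten.uniq
#=> [{"authorId"=>12, "id"=>2, "likes"=>469},
#    {"authorId"=>5, "id"=>8, "likes"=>735},
#    {"authorId"=>8, "id"=>10, "likes"=>853},
#    {"authorId"=>9, "id"=>1, "likes"=>960},
#    {"authorId"=>8, "id"=>4, "likes"=>728}]
h.values.flatten.uniq.sort_by { _1['id'] }  # only if you wanted the result ordered by id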
"Uniqueness" Can Be Tricky Absent Additional Context
The only reason to look at your id keys at all would be if you had duplicate IDs in multiple unique Hash objects, and if that were the case you'd then have to worry about which Hash was the correct one to keep. For example, given:
[ {'id' => 1, 'authorId' => 9, 'likes' => 1_920},
{'id' => 1, 'authorId' => 9, 'likes' => 960} ]
which one of these records is the "duplicate" one? Without other data, such as a timestamp, simply chaining uniq { _1['id'] } or merging the Hash objects will net you either the first or the last record, respectively. Consider:
[
{'id' => 1, 'authorId' => 9, 'likes' => 1_920},
{'id' => 1, 'authorId' => 9, 'likes' => 960}
].uniq { _1['id'] }
#=> [{"id"=>1, "authorId"=>9, "likes"=>1920}]
[
{'id' => 1, 'authorId' => 9, 'likes' => 1_920},
{'id' => 1, 'authorId' => 9, 'likes' => 960}
].reduce({}, :merge)
#=> {"id"=>1, "authorId"=>9, "likes"=>960}
Leveraging Context Like Rails-Specific Timestamp Features
While the uniqueness problem described above may seem out of scope for the question you're currently being asked, understanding the limitations of any kind of data transformation is useful. In addition, knowing that Ruby on Rails supports ActiveRecord::Timestamp and the creation and management of timestamp-related columns within database migrations may be highly relevant in a broader sense.
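As a rough sketch only (the updated_at values below are hypothetical and not part of the posted data), a timestamp would let you keep the most recently updated record per id instead of relying on first-wins or last-wins behaviour:
records = [
  { 'id' => 1, 'authorId' => 9, 'likes' => 1_920, 'updated_at' => Time.new(2022, 1, 2) },
  { 'id' => 1, 'authorId' => 9, 'likes' => 960,   'updated_at' => Time.new(2022, 1, 1) }
]
# Keep the newest record for each id.
records.group_by { _1['id'] }.map { |_, dups| dups.max_by { _1['updated_at'] } }
#=> keeps the 'likes' => 1_920 record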
You don't need to know these things to answer the original question. However, knowing when a given solution fits a specific use case and when it doesn't is important too.
I'm struggling to get a groupBy on a collection to work; I'm not getting the concept just yet.
I'm pulling a collection of results from a table for a player; the Eloquent collection will have data like this:
['player_id'=>1, 'opposition_id'=>10, 'result'=>'won', 'points'=>2],
['player_id'=>1, 'opposition_id'=>11, 'result'=>'lost', 'points'=>0],
['player_id'=>1, 'opposition_id'=>12, 'result'=>'lost', 'points'=>0],
['player_id'=>1, 'opposition_id'=>10, 'result'=>'won', 'points'=>2],
['player_id'=>1, 'opposition_id'=>11, 'result'=>'lost', 'points'=>0],
['player_id'=>1, 'opposition_id'=>10, 'result'=>'lost', 'points'=>0],
['player_id'=>1, 'opposition_id'=>12, 'result'=>'won', 'points'=>2],
I want to groupBy('opposition_id') and then get a count of total results, total won, total lost, and a sum of points, to end up with a collection like this:
['opposition_id'=>10, 'results'=>3, 'won'=>2, 'lost'=>1, 'points'=>4],
['opposition_id'=>11, 'results'=>2, 'won'=>0, 'lost'=>2, 'points'=>0],
['opposition_id'=>12, 'results'=>2, 'won'=>1, 'lost'=>1, 'points'=>2]
I'm trying to avoid going back to the database to do this as I already have the results from previous activity.
How can I do this using Laravel collection methods? So far all I have is:
$stats = $results->groupBy('opposition_id');
I've looked at map() but don't yet understand that method well enough to work through a solution.
Can anyone point me in the right direction, please?
Happy to go back to the database if needed, but I assumed I could do this with the collection I already have rather than create another query. The solutions I've found on here all appear to do the work in the query.
Thank you
Take a look here; it's working code with explanations in the comments.
// make a collection
$c = collect(
    [
        ['player_id' => 1, 'opposition_id' => 10, 'result' => 'won', 'points' => 2],
        ['player_id' => 1, 'opposition_id' => 11, 'result' => 'lost', 'points' => 0],
        ['player_id' => 1, 'opposition_id' => 12, 'result' => 'lost', 'points' => 0],
        ['player_id' => 1, 'opposition_id' => 10, 'result' => 'won', 'points' => 2],
        ['player_id' => 1, 'opposition_id' => 11, 'result' => 'lost', 'points' => 0],
        ['player_id' => 1, 'opposition_id' => 10, 'result' => 'lost', 'points' => 0],
        ['player_id' => 1, 'opposition_id' => 12, 'result' => 'won', 'points' => 2],
    ]
);
// this only splits the rows into groups, without anything else.
// $groups will be a collection; its keys are 'opposition_id' values and its values are collections of rows with the same opposition_id.
$groups = $c->groupBy('opposition_id');
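// e.g. with the data above, $groups is (roughly):
// [10 => collection of the 3 rows with opposition_id 10, 11 => collection of 2 rows, 12 => collection of 2 rows]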
// we will use map to accumulate each group of rows into a single row.
// $group is a collection of rows that have the same opposition_id.
$groupwithcount = $groups->map(function ($group) {
    return [
        'opposition_id' => $group->first()['opposition_id'], // opposition_id is constant inside the same group, so just take it from the first row.
        'results' => $group->count(), // total number of results in the group
        'points' => $group->sum('points'),
        'won' => $group->where('result', 'won')->count(),
        'lost' => $group->where('result', 'lost')->count(),
    ];
});
// if you don't like to take the first opposition_id you can use mapWithKeys:
$groupwithcount = $groups->mapWithKeys(function ($group, $key) {
    return [
        $key => [
            'opposition_id' => $key, // $key is what we grouped by, so it's constant for each group of rows
            'results' => $group->count(),
            'points' => $group->sum('points'),
            'won' => $group->where('result', 'won')->count(),
            'lost' => $group->where('result', 'lost')->count(),
        ],
    ];
});
// here $groupwithcount will give you objects/arrays keyed by opposition_id:
[
    10 => ["opposition_id" => 10, "results" => 3, "points" => 4, "won" => 2, "lost" => 1],
    11 => ["opposition_id" => 11, "results" => 2, "points" => 0, "won" => 0, "lost" => 2],
    12 => ["opposition_id" => 12, "results" => 2, "points" => 2, "won" => 1, "lost" => 1],
]
// if you use $groupwithcount->values() it'll reset the keys to a 0-based sequence as usual:
[
    0 => ["opposition_id" => 10, "results" => 3, "points" => 4, "won" => 2, "lost" => 1],
    1 => ["opposition_id" => 11, "results" => 2, "points" => 0, "won" => 0, "lost" => 2],
    2 => ["opposition_id" => 12, "results" => 2, "points" => 2, "won" => 1, "lost" => 1],
]
I have the following Array of Hashes:
a = [{:a => 1, :b => "x"}, {:a => 2, :b => "y"}]
I need to turn it into:
z={"x" => 1, "y" => 2}
or:
z={1 => "x", 2 => "y"}
Can I do this in a clean and functional way?
Something like this:
Hash[a.map(&:values)] # => {1=>"x", 2=>"y"}
if you want the other way:
Hash[a.map(&:values).map(&:reverse)] # => {"x"=>1, "y"=>2}
incorporating the suggestion from #squiguy:
Hash[a.map(&:values)].invert
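A small aside, not part of the original answer: on any reasonably recent Ruby you can also spell these with Array#to_h instead of Hash[]:
a = [{:a => 1, :b => "x"}, {:a => 2, :b => "y"}]
a.map(&:values).to_h          #=> {1=>"x", 2=>"y"}
a.map(&:values).to_h.invert   #=> {"x"=>1, "y"=>2}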
I'd like to merge the following hashes together.
h1 = {"201201" => {:received => 2}, "201202" => {:received => 4 }}
h2 = {"201201" => {:closed => 1}, "201202" => {:closed => 1 }}
particularly, my expected result is:
h1 = {"201201" => {:received => 2, :closed => 1}, "201202" => {:received => 4, :closed => 1 }}
I have tried several variations, such as:
h = h1.merge(h2){|key, first, second| {first , second} }
Unfortunately, none of them seemed to work out for me.
Any advice would be really appreciated.
This should work for you:
h = h1.merge(h2){|key, first, second| first.merge(second)}
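For reference, running that against the h1 and h2 from the question produces the expected result:
h1 = {"201201" => {:received => 2}, "201202" => {:received => 4}}
h2 = {"201201" => {:closed => 1}, "201202" => {:closed => 1}}
h = h1.merge(h2) { |key, first, second| first.merge(second) }
#=> {"201201"=>{:received=>2, :closed=>1}, "201202"=>{:received=>4, :closed=>1}}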
Say for example I've got a collection like this:
[{"name" => "Ganesh", "magic_number" => 7}, {"name" => "Comrade", "magic_number" => 2}...]
How can I change the value of ALL the magic_numbers in the collection to be the same value (e.g. 8)?
I'm sure it uses something like map or collect, but at the moment I can't seem to both make the change and get back the whole collection with the changes; I end up with one or the other...
Just use .each:
a = [{"name" => "Ganesh", "magic_number" => 7}, {"name" => "Comrade", "magic_number" => 2} ]
a.each { |x| x['magic_number'] = 8 }
# a is now [{"magic_number"=>8, "name"=>"Ganesh"}, {"magic_number"=>8, "name"=>"Comrade"}]
The argument to the block is a reference to the original elements, so you can change them as desired. Note that this changes a in place, which I think is what you're after.
This works:
x = [{"name" => "Ganesh", "magic_number" => 7}, {"name" => "Comrade", "magic_number" => 2}]
x.map{|i| i["magic_number"] = 0; i }
=> [{"magic_number"=>0, "name"=>"Ganesh"}, {"magic_number"=>0, "name"=>"Comrade"}]