Acting on filtered elements in a list - enums

I have a collection of records from a data source, denoting {:person, _} for a person, {:dog, _} for a dog, etc. I'd like to return a modified version of each record, but only when the record is a :person.
db = [
  {:dog, %{name: "Sparky", age: 4}},
  {:person, %{name: "Jeff", age: 34}},
  {:person, %{name: "Suzan", age: 41}},
  {:dog, %{name: "Bella", age: 8}}
]
I'd like to return:
[{:person, "Jeff", 34}, {:person, "Suzan", 41}]
I've tried:
db |> Enum.map(fn {:person, data} -> {:person, data.name, data.age} end)
But this raises a FunctionClauseError when it reaches a :dog entry, since the function head only matches {:person, _}.
Any advice?

One easy way to do this is with a comprehension.
for {:person, data} <- db, do: {:person, data.name, data.age}
Comprehensions are a powerful tool in Elixir that, among other things, let you filter a list with a pattern in the generator. The snippet above only iterates over items that are two-element tuples whose first element is the atom :person; all other entries are filtered out. Comprehensions can do more than this, of course. If you wanted to enforce that every returned record includes a name and an age, you could add filters like this:
for {:person, %{age: age, name: name}} <- db,
    is_integer(age),
    is_binary(name),
    do: {:person, name, age}
This version only returns entries that are two-element tuples with :person as the first element, whose data contains :age and :name keys, and where age is an integer and name is a binary.
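For reference (my own check against the sample data, not part of the original answer), running the first comprehension produces exactly the requested output:

for {:person, data} <- db, do: {:person, data.name, data.age}
#=> [{:person, "Jeff", 34}, {:person, "Suzan", 41}]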

First you need to filter down to the elements you want to use, then map over them:
db
|> Enum.filter(fn data -> match?({:person, _}, data) end)
|> Enum.map(fn {:person, data} -> {:person, data.name, data.age} end)
Enum.map/2 takes each input element and builds a new list out of the values returned by the function you pass in.
Enum.filter/2 builds a new list containing only the elements for which the given function returns a truthy value.
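As an aside (my addition, not part of either answer), the filter-and-map pair can also be collapsed into a single pass with Enum.flat_map/2, returning an empty list for entries you want to drop:

db
|> Enum.flat_map(fn
  # Keep persons, transformed into the requested shape.
  {:person, data} -> [{:person, data.name, data.age}]
  # Drop everything else.
  _other -> []
end)
#=> [{:person, "Jeff", 34}, {:person, "Suzan", 41}]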

Related

Efficiently resolving belongs_to relationship in elixir dataloader?

Is it possible to use Elixir's Dataloader to query a belongs_to relationship efficiently? It seems that the load queries all of the items it needs, but the get returns the first of the loaded items regardless of which single item it actually needs. This is the code I am using now:
field :node, :node_object, resolve: fn parent, _, %{context: %{loader: loader}} ->
  # parent.node_id = 1, but concurrently also another parent.node_id = 5
  loader
  |> Dataloader.load(NodeContext, :node, parent) # loads node_id 5 and 1
  |> on_load(fn loader ->
    loader
    |> Dataloader.get(NodeContext, :node, parent) # always returns the node with id = 5
    |> (&{:ok, &1}).()
  end)
end
My current workaround is the code below, but it makes things much uglier and less friendly to the Ecto schemas, since I have to explicitly specify the node schema and the parent's node_id field here instead of letting Dataloader infer them from the existing Ecto schemas:
field :node, :node_object, resolve: fn parent, _, %{context: %{loader: loader}} ->
  loader
  |> Dataloader.load(NodeContext, {:one, NodeSchema}, id: parent.node_id)
  |> on_load(fn loader ->
    loader
    |> Dataloader.get(NodeContext, {:one, NodeSchema}, id: parent.node_id)
    |> (&{:ok, &1}).()
  end)
end
I was able to fix this by making node_id a primary key of the parent schema, like this:
defmodule MyApp.ParentSchema do
  use Ecto.Schema

  alias MyApp.NodeSchema

  @primary_key false
  embedded_schema do
    belongs_to :node, NodeSchema, primary_key: true
  end
end
I'm not sure if this is intended behavior for the dataloader since it seems like the primary_key check should happen on the child object instead of the parent object.
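For what it's worth (my own verification sketch, not from the original post), you can confirm what the schema change does via Ecto's reflection: with @primary_key false and the belongs_to marked primary_key: true, the parent's primary key becomes the node_id foreign key, which appears to be what lets Dataloader match each loaded node back to the right parent.

# Hypothetical IEx check; MyApp.ParentSchema is the module defined above.
MyApp.ParentSchema.__schema__(:primary_key)
#=> [:node_id]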

How can I mutate a list in Elixir while iterating it with Enum.map? Or is nested recursion a good approach?

I have two lists in Elixir. One list (list1) has values that get consumed by another list (list2), so I need to iterate over list2 and update values in both list1 and list2.
list1 = [
  %{reg_no: 10, to_assign: 100, allocated: 50},
  %{reg_no: 11, to_assign: 100, allocated: 30},
  %{reg_no: 12, to_assign: 100, allocated: 20}
]

list2 = [
  %{student: student1, quantity: 60, reg_nos: [reg_no_10, reg_no_11]},
  %{student: student2, quantity: 40, reg_nos: [reg_no_11, reg_no_12]},
  %{student: student3, quantity: 30, reg_nos: nil}
]
I need to assign values from list1 to the quantity field of list2 until each quantity is fulfilled. E.g. student1's quantity is 60, which will need reg_no 10 and reg_no 11.
With Enum.map I cannot pass the updated list1 into the 2nd iteration over list2 and assign reg_nos: [reg_no_11, reg_no_12] for student2.
So, my question is: how can I send the updated list1 into the 2nd iteration over list2?
I am using recursion to get the quantity right for each element in list2. But should I also use recursion just to thread the updated list1 through list2? With that approach there would be two nested recursions. Is that a good approach?
If I understand your question correctly, you want to change values in a given list x, based on a list of values in another list y.
Mutating a list in place, as you describe, is not possible in a functional language due to immutability, but you can use a reduce operation where x is the state, the so-called "accumulator".
Below is an example where I have a ledger with bank accounts, and a list with transactions. If I want to update the ledger based on the transactions I need to reduce over the transactions and update the ledger per transaction, and pass the updated ledger on to the next transaction. This is the problem you are seeing as well.
As you can see in the example, in contrast to map, the user-defined function takes a second parameter (ledger). This is the "state" you build up while traversing the list of transactions. Each time you process a transaction you have a chance to return a modified version of the state. This state is then used to process the next transaction, which in turn can change it as well.
The final result of a reduce call is the accumulator. In this case, the updated ledger.
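Before the full ledger example, here is the minimal shape of Enum.reduce/3 (a tiny sketch of my own, not from the original answer):

# The accumulator starts at 0; the function's return value
# becomes the accumulator for the next element.
Enum.reduce([1, 2, 3], 0, fn x, acc -> acc + x end)
#=> 6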
def example do
  # A ledger, where we assume the ids are unique!
  ledger = [%{id: 1, amount: 100}, %{id: 2, amount: 50}]
  transactions = [{:transaction, 1, 2, 10}]

  transactions
  |> Enum.reduce(ledger, fn transaction, ledger ->
    {:transaction, from, to, amount} = transaction

    # Update the ledger.
    Enum.map(ledger, fn entry ->
      cond do
        entry.id == from -> %{entry | amount: entry.amount - amount}
        entry.id == to -> %{entry | amount: entry.amount + amount}
        # Without this fallback, cond raises CondClauseError for
        # accounts that are neither sender nor receiver.
        true -> entry
      end
    end)
  end)
end
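For what it's worth (my own check, not in the original answer): the single transaction moves 10 from account 1 to account 2, so the reduce returns the updated ledger.

example()
#=> [%{id: 1, amount: 90}, %{id: 2, amount: 60}]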

RxJS groupBy, reduce in order to pivot on ID

I'm looking for a bit of help understanding this example taken from the rxjs docs.
Observable.of<Obj>(
    {id: 1, name: 'aze1'},
    {id: 2, name: 'sf2'},
    {id: 2, name: 'dg2'},
    {id: 1, name: 'erg1'},
    {id: 1, name: 'df1'},
    {id: 2, name: 'sfqfb2'},
    {id: 3, name: 'qfs1'},
    {id: 2, name: 'qsgqsfg2'}
  )
  .groupBy(p => p.id, p => p.name)
  .flatMap(group$ => group$.reduce((acc, cur) => [...acc, cur], ["" + group$.key]))
  .map(arr => ({'id': parseInt(arr[0]), 'values': arr.slice(1)}))
  .subscribe(p => console.log(p));
So the aim here is to group all the items by id and produce an object with a single ID and a values property which includes all the emitted names with matching IDs.
The second parameter to the groupBy operator is an element selector: it effectively filters each emitted object's properties down to just the name. I suppose the same thing could be achieved by mapping the observable beforehand. Is it possible to pass more than one value through the element selector?
The line I am finding very confusing is this one:
.flatMap(group$ => group$.reduce((acc, cur) => [...acc, cur], ["" + group$.key]))
I get that we now have three grouped observables (one per id), each effectively a stream of the emitted objects. For each grouped observable, the aim of this code is to reduce it to an array, where the first entry is the key and the subsequent entries are the names.
But why is the reduce function initialized with ["" + group$.key], rather than just [group$.key]?
And why is this three dot notation [...acc, cur] used when returning the reduced array on each iteration?
But why is the reduce function initialized with ["" + group$.key], rather than just [group$.key]?
The clue to answering this question is in the .map() function a bit further down in the code.
.map(arr => ({'id': parseInt(arr[0]), 'values': arr.slice(1)}))
^^^^^^^^
Note the use of parseInt. Without the "" + in the flatMap this simply wouldn't compile, since you'd be passing a number to a function that expects a string. Remove the parseInt and just use arr[0], and you can remove the "" + as well.
And why is this three-dot notation [...acc, cur] used when returning the reduced array on each iteration?
The spread operator here is used to add to the array without mutating it. It copies all the existing elements of acc into a new array and appends cur at the end. For example, if acc is ['1', 'aze1'] and cur is 'sf2', the result is the new array ['1', 'aze1', 'sf2'], leaving acc untouched.

How do I dynamically name a collection?

Pseudo-code: collect(n) AS :Label
The primary purpose of this is to make the properties easy to read in the API server (a Node application).
Verbose example:
MATCH (user:User)--(n)
WHERE n:Movie OR n:Actor
RETURN user,
       CASE
         WHEN n:Movie THEN "movies"
         WHEN n:Actor THEN "actors"
       END AS type, collect(n) AS :type
Expected output in JSON:
[{
  "user": {
    ....
  },
  "movies": [
    {
      "_id": 1987,
      "labels": [
        "Movie"
      ],
      "properties": {
        ....
      }
    }
  ],
  "actors": [ .... ]
}]
The closest I've gotten is:
[{
  "user": {
    ....
  },
  "type": "movies",
  "collect(n)": [
    {
      "_id": 1987,
      "labels": [
        "Movie"
      ],
      "properties": {
        ....
      }
    }
  ]
}]
The goal is to be able to read the JSON result with ease like so:
neo4j.cypher.query(statement, function(err, results) {
  for (const result of results) {
    var user = result.user;
    var movies = result.movies;
  }
});
Edit:
I apologize for any confusion in my inability to correctly name database semantics.
I'm wondering if it's enough just to output the user and their lists of both actors and movies, rather than attempting something more complicated to match and combine both.
MATCH (user:User)
OPTIONAL MATCH (user)--(m:Movie)
OPTIONAL MATCH (user)--(a:Actor)
RETURN user, COLLECT(m) as movies, COLLECT(a) as actors
This query should return each User and his/her related movies and actors (in separate collections):
MATCH (user:User)--(n)
WHERE n:Movie OR n:Actor
RETURN user,
REDUCE(s = {movies:[], actors:[]}, x IN COLLECT(n) |
CASE WHEN x:Movie
THEN {movies: s.movies + x, actors: s.actors}
ELSE {movies: s.movies, actors: s.actors + x}
END) AS types;
As far as a dynamic solution to your question goes, one that will work with any node connected to your user, there are a few options, but I don't believe you can make the column names, or even the names of the returned collections, dynamic like this. We can, however, associate each collection with its type.
MATCH (user:User)--(n)
WITH user, LABELS(n) as type, COLLECT(n) as nodes
WITH user, {type:type, nodes:nodes} as connectedNodes
RETURN user, COLLECT(connectedNodes) as connectedNodes
Or, if you prefer working with multiple rows, one row each per node type:
MATCH (user:User)--(n)
WITH user, LABELS(n) as type, COLLECT(n) as collection
RETURN user, {type:type, data:collection} as connectedNodes
Note that LABELS(n) returns a list of labels, since nodes can be multi-labeled. If you are guaranteed that every interested node has exactly one label, then you can use the first element of the list rather than the list itself. Just use LABELS(n)[0] instead.
You can dynamically sort nodes by label, and then convert to the map using the apoc library:
WITH ['Actor','Movie'] as LBS
// What are the nodes we need:
MATCH (U:User)--(N) WHERE size(filter(l in labels(N) WHERE l in LBS))>0
WITH U, LBS, N, labels(N) as nls
UNWIND nls as nl
// Combine the nodes on their labels:
WITH U, LBS, N, nl WHERE nl in LBS
WITH U, nl, collect(N) as RELS
WITH U, collect( [nl, RELS] ) as pairs
// Convert pairs "label - values" to the map:
CALL apoc.map.fromPairs(pairs) YIELD value
RETURN U as user, value

How to combine rows using LINQ?

Say I have an entity with the properties [Id, UserName, ProductName], where Id is the PK and the other fields are not unique, so rows with the same UserName appear multiple times.
For one of the views I need to get a collection that would have unique UserName, and other fields would be combined together using string concatenation or something similar.
If I have
[0, John, Alpha]
[1, Mary, Beta]
[2, John, Gamma]
I need a query that would get me a collection like
[John, Alpha Gamma]
[Mary, Beta]
And it would be awesome if all that could be accomplished on the database side without loading the entities.
You are looking for GroupBy():
var results = context.MyEntities.GroupBy(x => x.UserName);

foreach (var item in results)
{
    Console.WriteLine("{0} : {1}", item.Key, string.Join(",", item.Select(x => x.ProductName)));
}
