Enum.reduce with Ecto.Multi - phoenix-framework

I want to reduce a map of the form
%{"overall" => "2016-08-27", "2" => "2016-06-27", "8" => "2015-08-27"} with these two functions:
def set_agreed_date(id, agreed_date) do
  Repo.get(Job, id)
  |> change(agreed_date: agreed_date)
  # |> Repo.update # removed per Dogbert's comment
end
def update(conn, %{"id" => job_id, "agreed_dates" => agreed_dates}, user) do
  update = Enum.reduce(agreed_dates, Multi.new, fn {k, v}, acc ->
    {:ok, d} = Ecto.Date.cast(v)
    case k do
      "overall" ->
        Multi.update(acc, "overall", set_agreed_date(job_id, d))
      _ ->
        Multi.update(acc, k, ShipToController.set_agreed_date(k, d))
    end
  end)
  case Repo.transaction(update) do
    {:ok, ?? not sure what I will get here ??} -> ...
but I am getting:
[error] #PID<0.972.0> running Api.Endpoint terminated
Server: localhost:4000 (http)
Request: PUT /api/abc/14
** (exit) an exception was raised:
** (FunctionClauseError) no function clause matching in Ecto.Multi.add_operation/3
(ecto) lib/ecto/multi.ex:331: Ecto.Multi.add_operation(%Ecto.Multi{names: #MapSet<[]>, operations: []}, "overall", {:changeset, #Ecto.Changeset<action: :update, changes: %{agreed_date: #Ecto.Date<2016-08-27>}, errors: [], data: #Api.Job<>, valid?: true>, []})
(stdlib) lists.erl:1263: :lists.foldl/3
(api) web/controllers/controller.ex:71: Api.Controller.update/3
(api) web/controllers/controller.ex:1: Api.Controller.action/2
(api) web/controllers/controller.ex:1: Api.Controller.phoenix_controller_pipeline/2
(api) lib/api/endpoint.ex:1: Api.Endpoint.instrument/4
(api) lib/phoenix/router.ex:261: Api.Router.dispatch/2
(api) web/router.ex:1: Api.Router.do_call/2
(api) lib/api/endpoint.ex:1: Api.Endpoint.phoenix_pipeline/1
(api) lib/plug/debugger.ex:93: Api.Endpoint."call (overridable 3)"/2
(api) lib/api/endpoint.ex:1: Api.Endpoint.call/2
(plug) lib/plug/adapters/cowboy/handler.ex:15: Plug.Adapters.Cowboy.Handler.upgrade/4
(cowboy) src/cowboy_protocol.erl:442: :cowboy_protocol.execute/4
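The FunctionClauseError points at the Multi operation name: in the Ecto version shown in this stack trace, Ecto.Multi.add_operation/3 only accepts atom names, and the code passes the strings "overall" and k. A minimal sketch of a fix, converting the keys to atoms (safe here only because the set of keys is small and trusted; String.to_atom/1 on unbounded user input leaks atoms):

```elixir
update =
  Enum.reduce(agreed_dates, Multi.new, fn {k, v}, acc ->
    {:ok, d} = Ecto.Date.cast(v)

    case k do
      "overall" -> Multi.update(acc, :overall, set_agreed_date(job_id, d))
      _ -> Multi.update(acc, String.to_atom(k), ShipToController.set_agreed_date(k, d))
    end
  end)
```

As for what Repo.transaction/1 returns: on success it is {:ok, results}, where results is a map keyed by the Multi names (:overall, :"2", ...) with the updated structs as values; on failure it is {:error, failed_name, failed_changeset, changes_so_far}.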

Related

InfluxDB 2.0 Flux - How to Handle Division by Zero

Hi, I am trying to do a simple success-rate calculation between metrics, by dividing the number of successful requests by the number of attempts. The problem is that some intervals are empty, where both metrics are 0. When I run the query I get the "cannot divide by zero" runtime error below. In SQL there is a NULLIF function to avoid this. Is there something similar in Flux, or is there an alternate method to avoid division by zero?
Error: runtime error #7:6-7:90: map: failed to evaluate map function: cannot divide by zero
My Sample Query:
from(bucket: "my_db")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "HTTPRequests")
|> filter(fn: (r) => r["_field"] == "RequestAttempt" or r["_field"] == "RequestSuccess_E")
|> filter(fn: (r) => r["host"] == "host-a")
|> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")
|> map(fn: (r) => ({ r with Request_SR: r.RequestSuccess_E/r.RequestAttempt }))
Thanks in advance.
The Influx team answered my question. This worked for my case:
You could use some simple conditional logic to check the values of RequestSuccess_E and RequestAttempt. You can check whether they're null or 0. You do need to provide a default value for when the operands are null or zero, because you can't map a null value.
from(bucket: "my_db")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "HTTPRequests")
|> filter(fn: (r) => r["_field"] == "RequestAttempt" or r["_field"] == "RequestSuccess_E")
|> filter(fn: (r) => r["host"] == "host-a")
|> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")
|> map(fn: (r) => ({ r with
    Request_SR:
      if not exists r.RequestSuccess_E or r.RequestSuccess_E == 0.0 then 0.0
      else if not exists r.RequestAttempt or r.RequestAttempt == 0.0 then 0.0
      else r.RequestSuccess_E / r.RequestAttempt
  }))

RxJS which operator to use for "onIdle"?

My use case is as follows: I have a stream of operations for distinct elements, and I want to call "commit" on each object only if it has been idle for a certain amount of time OR a different element is received.
I was experimenting with groupBy and debounce, but did not get all the cases covered, e.g.:
action.pipe(
  groupBy(item => item.key),
  debounceTime(1000),
  mergeMap(item => item.commit())
)
I'm not sure what your goal is. Let's take the example of a situation where you have A => B => A arriving within less than the minimum idle time.
Option 1: each type of element has its own idle state, so the second emission of type A will be ignored.
Option 2: since there is no consecutive sequence, the second A will not be ignored.
OPTION 1 example:
action.pipe(
  groupBy(item => item.key),
  mergeMap(group => group.pipe(debounceTime(1000))),
  mergeMap(item => item.commit())
)
Optionally:
const IDLE_TIME = XXXX;
action.pipe(
  groupBy(item => item.key),
  mergeMap(group => merge(
    group.pipe(first()),
    group.pipe(
      timeInterval(),
      filter(x => x.interval > IDLE_TIME),
      map(x => x.value)
    )
  )),
  mergeMap(item => item.commit())
)
OPTION 2 example:
action.pipe(
  pairwise(),
  debounce(([previous, current]) => previous.key == current.key ? timer(1000) : EMPTY),
  map(([previous, current]) => current),
  mergeMap(item => item.commit())
)
You can assess the idle nature using auditTime, scan and filter:
action.pipe(
  // add the idle property to the item
  map(item => ({ ...item, idle: false })),
  // audit the stream each second
  auditTime(1000),
  // then use scan to compare with the previous emission at audit time
  scan(
    (prev, curr) => {
      // if the key remains the same then we have an idle state
      if (prev.key === curr.key) {
        // return a changed object to indicate we have an idle state
        return Object.assign({}, curr, { idle: true });
      } else {
        // otherwise just return the non-idle item
        return curr;
      }
      // seed with an object that cannot match the first emission key
    }, { key: null }
  ),
  // then filter out all emissions not flagged as idle
  filter(item => item.idle === true),
  // and commit
  mergeMap(item => item.commit())
)
Then you can use distinctUntilKeyChanged to achieve the second condition:
action.pipe(
  distinctUntilKeyChanged('key'),
  mergeMap(item => item.commit())
)
I'm not familiar with redux-observable, but you would typically merge these two observables and then commit at the end.
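To pin down which semantics you actually want, the "commit when idle or when a different key arrives" rule can be modeled as a plain function over timestamped events, outside RxJS entirely. This is a hypothetical helper for reasoning about the stream, not part of any of the operator pipelines above:

```javascript
// Pure model of the desired behavior: an item is committed when either
// (a) a different key arrives next, or (b) no same-key event follows it
// within `idleMs` milliseconds.
// `events` is a list of { t, key } sorted by time t.
function commitsFor(events, idleMs) {
  const commits = [];
  for (let i = 0; i < events.length; i++) {
    const cur = events[i];
    const next = events[i + 1];
    if (!next) {
      commits.push(cur.key); // stream ended: the item stays idle forever
    } else if (next.key !== cur.key) {
      commits.push(cur.key); // a different element arrived
    } else if (next.t - cur.t > idleMs) {
      commits.push(cur.key); // same key, but only after the idle window
    }
    // same key within the idle window: debounced, no commit
  }
  return commits;
}

const events = [
  { t: 0, key: "A" },
  { t: 100, key: "A" },  // A again inside the window: first A is debounced
  { t: 200, key: "B" },  // different key: commit A
  { t: 5000, key: "A" }, // B was idle long before this: commit B
];
console.log(commitsFor(events, 1000)); // -> [ 'A', 'B', 'A' ]
```

Writing a table like this for the A => B => A case makes it easy to check whether Option 1 or Option 2 above matches your intent.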

Inner Hits isn't working ElasticSearch

Good Day:
I'm using Elasticsearch/NEST to query against nested objects. What I've realized is that my nested object is empty; however, the parent is being returned despite there being no match.
ISearchResponse<Facility> responses = await this._elasticClient.SearchAsync<Facility>(a => a
    .Query(q => q
        .Bool(b => b
            .Must(m => m
                .Nested(n => n
                    .Query(nq => nq
                        .Term(t => t.Field(f => f.Reviews.First().UserId).Value(user.Id))
                    )
                    .InnerHits(ih => ih.From(0).Size(1).Name("UserWithReview"))
                )
            )
        )
    ));
When I look at the generated query, I'm even more confused about what is happening:
Successful low level call on POST: /dev/doc/_search?typed_keys=true
# Audit trail of this API call:
- [1] HealthyResponse: Node: http://localhost:9200/ Took: 00:00:00.9806442
# Request:
{}
As you can see the request is empty.
You haven't defined the nested query with all the properties needed; it's missing the Path property, which tells Elasticsearch which document field (i.e. path) to execute the query on. Looking at the rest of the query, it looks like this should be the Reviews property:
ISearchResponse<Facility> responses =
await this._elasticClient.SearchAsync<Facility>(a => a
.Query(q => q
.Bool(b => b
.Must(m => m
.Nested(n => n
.Path(f => f.Reviews) // <-- missing
.Query(nq => nq
.Term(t => t
.Field(f => f.Reviews.First().UserId)
.Value(user.Id)
)
)
.InnerHits(ih => ih.From(0).Size(1).Name("UserWithReview"))
)
)
)
)
);
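With Path supplied, the request body is no longer empty; it serializes to something like the following (field names assume NEST's default camel-casing conventions and may differ if you have customized them; the term value stands in for user.Id):

```json
{
  "query": {
    "bool": {
      "must": [
        {
          "nested": {
            "path": "reviews",
            "query": {
              "term": { "reviews.userId": { "value": "..." } }
            },
            "inner_hits": { "from": 0, "size": 1, "name": "UserWithReview" }
          }
        }
      ]
    }
  }
}
```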

Oracle decode logic implementation using Slick

I have the following problem: there is SQL using the Oracle DECODE function:
SELECT u.URLTYPE, u.URL
FROM KAA.ENTITYURLS u
JOIN KAA.ENTITY e
ON decode(e.isurlconfigured, 0, e.urlparentcode, 1, e.CODE,
NULL)=u.ENTITYCODE
JOIN CASINO.Casinos c ON e.casinocode = c.code
WHERE e.NAME = $entityName
AND C.NAME = $casinoName
I'm trying to express this SQL in my Slick code, like:
val queryUrlsEntityName = for {
  entityUrl <- entityUrls
  entity <- entities.filter(e => e.name.trim === entityName &&
    entityUrl.entityCode.asColumnOf[Option[Int]] == (e.isURLConfigured match {
      case Some(0) => e.urlParentCode
      case Some(1) => e.code.asColumnOf[Option[Int]]
      case _ => None
    })
  )
  casino <- casinos.filter(_.name.trim === casinoName) if entity.casinoCode == casino.code
} yield entityUrl
But I don't understand how I can implement the matching of values in the line
case Some(0) => e.urlParentCode
because I'm getting the error:
constructor cannot be instantiated to expected type;
[error] found : Some[A]
[error] required: slick.lifted.Rep[Option[Int]]
[error] case Some(0) => e.urlParentCode
Thanks for any advice.
The pattern match runs in Scala when the query is built, not in the generated SQL, so you can't match a Rep[Option[Int]] against Some(0): Rep is only Slick's placeholder for the column datatype. You either need to compare at the Rep level, or run the query first and transform the Rep[Option[Int]] into a real Option[Int]. I would prefer the first variant; this answer shows how to make the transformation from Rep, or you can map directly over the result:
map(u => u.someField).result.map(_.headOption).map {
  case Some(0) => .....
}
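If the goal is to keep the DECODE logic inside the generated SQL rather than branching in Scala, Slick's Case expressions build a SQL CASE. A sketch, under the assumption that the columns have the types shown in the question (untested; the Option lifting of the Then branches may need adjusting):

```scala
import slick.lifted.Case

// SQL CASE equivalent of
// DECODE(e.isurlconfigured, 0, e.urlparentcode, 1, e.code, NULL)
val joinKey =
  Case
    .If(e.isURLConfigured === 0).Then(e.urlParentCode)
    .If(e.isURLConfigured === 1).Then(e.code.asColumnOf[Option[Int]])
// leaving off .Else(...) makes the result Option-typed, matching the
// implicit NULL default of the original DECODE
```

The join condition then becomes a Rep-level comparison, e.g. entityUrl.entityCode.asColumnOf[Option[Int]] === joinKey, instead of the Scala match.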

Issue with nested loops in Ruby

Right now I have a list of edges of a graph. With this list, I'm trying to build an adjacency map, in which each key is an edge and the value is a list of edges adjacent to that edge. I'm using a nested loop that compares each edge in the list to every other edge in the list. However, this is not building the map I expect. Below is my code:
def build_adjacency
  @paths.each do |start_path|
    @paths.each do |end_path|
      # 3 cases for edge adjacency
      if (start_path.end_node == end_path.start_node) || (start_path.start_node == end_path.end_node) || (start_path.start_node == end_path.start_node)
        if @adjacency_map.has_key?("#{start_path}".to_s)
          @adjacency_map[:"#{start_path}".to_s] << end_path
        else
          value = [end_path]
          @adjacency_map[:"#{start_path}".to_s] = value
        end
      end
    end
  end
end
I also tried Array#combination, but that is not working either. Thank you for the help.
Test input: (start node, end node, color, type)
A B R C
B E B C
B C B T
C D G T
Output for @adjacency_map:
C:\Users\Jin\Documents\Mines\Algorithms (CSCI 406)\Project_3>ruby graph.rb
Key: A B R C Value: [#<Path:0x2548a28 @start_node="A", @end_node="B", @color="R", @type="C">, #<Path:0x2548968 @start_node="B", @end_node="E", @color="B", @type="C">, #<Path:0x25488a8 @start_node="B", @end_node="C", @color="B", @type="T">, #<Path:0x25487e8 @start_node="C", @end_node="D", @color="G", @type="T">]
Key: B E B C Value: [#<Path:0x2548a28 @start_node="A", @end_node="B", @color="R", @type="C">, #<Path:0x2548968 @start_node="B", @end_node="E", @color="B", @type="C">, #<Path:0x25488a8 @start_node="B", @end_node="C", @color="B", @type="T">, #<Path:0x25487e8 @start_node="C", @end_node="D", @color="G", @type="T">]
Key: B C B T Value: [#<Path:0x2548a28 @start_node="A", @end_node="B", @color="R", @type="C">, #<Path:0x2548968 @start_node="B", @end_node="E", @color="B", @type="C">, #<Path:0x25488a8 @start_node="B", @end_node="C", @color="B", @type="T">, #<Path:0x25487e8 @start_node="C", @end_node="D", @color="G", @type="T">]
Key: C D G T Value: [#<Path:0x2548a28 @start_node="A", @end_node="B", @color="R", @type="C">, #<Path:0x2548968 @start_node="B", @end_node="E", @color="B", @type="C">, #<Path:0x25488a8 @start_node="B", @end_node="C", @color="B", @type="T">, #<Path:0x25487e8 @start_node="C", @end_node="D", @color="G", @type="T">]
The following is strange:
:"#{start_path}".to_s
Whatever class your object start_path is, you convert it to a string via interpolation, then convert it to a symbol, only to then convert it to a string again. You can just use strings as hash keys. Also, in your call to has_key? you have not used the colon (which should normally make no difference).
Additionally, if you are unsure whether you implemented the condition correctly, I recommend creating a method to encapsulate it, especially when the condition has a semantic meaning.
It seems the only problem with your code is that it doesn't check whether start_path and end_path are the same. So it adds an unnecessary "A-B is adjacent to A-B" entry into your map.
Maybe you should just add one line?
def build_adjacency
  @paths.each do |start_path|
    @paths.each do |end_path|
      next if start_path == end_path # this one!
      # 3 cases for edge adjacency
      if (start_path.end_node == end_path.start_node) || (start_path.start_node == end_path.end_node) || (start_path.start_node == end_path.start_node)
        if @adjacency_map.has_key?("#{start_path}".to_s)
          @adjacency_map[:"#{start_path}".to_s] << end_path
        else
          value = [end_path]
          @adjacency_map[:"#{start_path}".to_s] = value
        end
      end
    end
  end
end
Here's my full code, reproducing the solution: https://gist.github.com/zverok/6785c213fd78430cd423
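For reference, here is a minimal, self-contained version of the fixed loop on the test input from the question. The Struct-based Path is an assumption for illustration (the question's Path class may differ), and plain strings are used as hash keys:

```ruby
# Path compares by value because Struct defines == over its members.
Path = Struct.new(:start_node, :end_node, :color, :type) do
  def to_s
    [start_node, end_node, color, type].join(" ")
  end
end

def build_adjacency(paths)
  # default each missing key to an empty list, avoiding the has_key? branch
  adjacency = Hash.new { |h, k| h[k] = [] }
  paths.each do |start_path|
    paths.each do |end_path|
      next if start_path == end_path # skip comparing an edge with itself
      # 3 cases for edge adjacency: the two edges share a node
      if start_path.end_node == end_path.start_node ||
         start_path.start_node == end_path.end_node ||
         start_path.start_node == end_path.start_node
        adjacency[start_path.to_s] << end_path
      end
    end
  end
  adjacency
end

paths = [
  Path.new("A", "B", "R", "C"),
  Path.new("B", "E", "B", "C"),
  Path.new("B", "C", "B", "T"),
  Path.new("C", "D", "G", "T"),
]

adj = build_adjacency(paths)
puts adj["A B R C"].map(&:to_s).inspect # => ["B E B C", "B C B T"]
```

Note that Struct equality is by value, so the `next if` guard also skips duplicate edges with identical fields; use `start_path.equal?(end_path)` if you need strict object identity.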
