Caching API calls in Clojure

I want to implement caching of API calls in Clojure. My application calls some APIs for certain functionality, and I want to reduce those calls. I want to use clojure.core.cache.wrapped, which is implemented on top of clojure.core.cache, and cache each API call's response based on its URL.
The URL is a GET request with a query in it that differentiates the response, e.g.
http://localhost:3000/:clientid/get_data
Sample code:
(:require [clojure-mauth-client.request :refer [get!]]
          [clojure.core.cache.wrapped :as cw])

(def my-cache (cw/ttl-cache-factory {} :ttl 60000))

(defn get-data-caller [cid]
  (cw/lookup-or-miss my-cache cid get-data))

(defn get-data [cid]
  (let [req-url (str "/api/get-data?id=" cid)
        response (retry-request (sign-credentials #(get! base-url req-url)) 3)]
    (println response)))
I want to implement this in a way that it caches depending on the cid.
In the above code, 3 is the maximum number of retries.
With the current implementation the cache never takes effect: the API is called again and again.

I found the solution. The main mistake I made was in how I used lookup-or-miss inside get-data-caller.
lookup-or-miss actually accepts 3 params:
lookup-or-miss [cache key fn]
Here:
1. cache is the cache that we created.
2. key is the value we want to use as the key in our cache.
3. The third param must be a function that takes the key as an argument and fetches the data for us.
So lookup-or-miss first checks whether cached data exists for the given key; if not, the key is passed as an argument to the function in the third param, which fetches fresh data, and the result is stored under that key.
With the above understanding, I rewrote my code as below:
(:require [clojure-mauth-client.request :refer [get!]]
          [clojure.core.cache.wrapped :as cw])

(def my-cache (cw/ttl-cache-factory {} :ttl 60000))

(defn http-request
  [req-url]
  (retry-request (sign-credentials #(get! base-url req-url)) 3))

(defn get-data [cid]
  (let [req-url (str "/api/get-data?id=" cid)
        response (cw/lookup-or-miss my-cache req-url http-request)]
    (println response)))

(defn get-data-caller [cid]
  (get-data cid))
So here lookup-or-miss looks up the req-url key in my-cache; if it is present, the stored value is returned directly, and if not, http-request is called with req-url as the argument.
So lookup-or-miss is executed something like this (pseudocode for understanding):
(if (contains? @my-cache req-url)
  (get @my-cache req-url)
  (http-request req-url))
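For readers less familiar with Clojure, the same check-then-fetch-and-store pattern can be sketched in plain Python. This is an illustrative toy only (the real clojure.core.cache.wrapped handles the lookup atomically and supports many cache strategies); the class and function names here are made up for the example:

```python
import time

class TTLCache:
    """Toy TTL cache illustrating the lookup-or-miss pattern."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry time)

    def lookup_or_miss(self, key, fetch_fn):
        hit = self.store.get(key)
        if hit is not None and hit[1] > time.monotonic():
            return hit[0]          # cache hit: return the stored value
        value = fetch_fn(key)      # cache miss: fetch_fn receives the key
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value

calls = []
def fake_http(url):
    # Stand-in for the real HTTP call; records how often it runs.
    calls.append(url)
    return f"response for {url}"

cache = TTLCache(ttl_seconds=60)
cache.lookup_or_miss("/api/get-data?id=1", fake_http)
cache.lookup_or_miss("/api/get-data?id=1", fake_http)  # served from cache
print(len(calls))  # prints 1: the underlying fetch ran only once
```

The key point matches the Clojure fix: the key identifies the cached entry, and the fetch function is only invoked on a miss, with that key as its argument.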

Related

How to iterate over a list of values returned from ops to jobs in Dagster

I am new to the Dagster world and working with the ops and jobs concepts.
My requirement is to read a list of data from config_schema, pass it to an @op function, and return the same list to the job.
The code is shown below:
@op(config_schema={"table_name": list})
def read_tableNames(context):
    lst = context.op_config['table_name']
    return lst

@job
def write_db():
    tableNames_frozenList = read_tableNames()
    print(f'-------------->', type(tableNames_frozenList))
    print(f'-------------->{tableNames_frozenList}')
When the @op function accepts the list, it shows up as a frozenlist type, but when I try to return it to the job it gets converted into the <class 'dagster._core.definitions.composition.InvokedNodeOutputHandle'> data type.
My requirement is to fetch the list of data, iterate over it, and perform some operations on each item of the list using @ops.
Please help me understand this.
Thanks in advance!
When using ops / graphs / jobs in Dagster it's very important to understand that the code defined within a @graph or @job definition is only executed when your code is loaded by Dagster, NOT when the graph is actually executing. The code defined within a @graph or @job definition is essentially a compilation step that only serves to define the dependencies between ops - there shouldn't be any general-purpose Python code within those definitions. Whatever operations you want to perform on data flowing through your job should take place within the @op definitions. So if you wanted to print the values of the list that is input via a config schema, you might do something like:
@op(config_schema={"table_name": list})
def read_tableNames(context):
    lst = context.op_config['table_name']
    context.log.info(f'--------------> {type(lst)}')
    context.log.info(f'--------------> {lst}')
Here's an example using two ops to do this data flow:
@op(config_schema={"table_name": list})
def read_tableNames(context):
    lst = context.op_config['table_name']
    return lst

@op
def print_tableNames(context, table_names):
    context.log.info(f'--------------> {type(table_names)}')

@job
def simple_flow():
    print_tableNames(read_tableNames())

Have a look at some of the Dagster tutorials for more examples.
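The load-time-vs-run-time distinction can also be illustrated without Dagster at all. The following is a minimal, hypothetical sketch (the class and decorator names are made up, not Dagster's API) of why calling an op inside a job body yields a handle object rather than the data: the job body only wires up dependencies, and execution happens later by walking that graph:

```python
class OutputHandle:
    """Placeholder returned when an op is 'called' at graph-build time."""
    def __init__(self, fn, deps):
        self.fn, self.deps = fn, deps

def op(fn):
    # Calling a decorated op does NOT run it; it records a graph node.
    def call(*deps):
        return OutputHandle(fn, deps)
    call.__name__ = fn.__name__
    return call

def execute(handle):
    # Actual execution: resolve dependencies first, then run the op body.
    args = [execute(d) if isinstance(d, OutputHandle) else d
            for d in handle.deps]
    return handle.fn(*args)

@op
def read_table_names():
    return ["users", "orders"]

@op
def count_tables(names):
    return len(names)

# "Job body": this only builds the dependency graph.
result_handle = count_tables(read_table_names())
print(type(result_handle).__name__)  # OutputHandle, not a list or int
print(execute(result_handle))        # 2 - the data only exists at run time
```

This is why printing `tableNames_frozenList` inside the @job body shows an InvokedNodeOutputHandle: at that point the op has not run yet.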

How do I pass connection-time websocket parameters to Phoenix from Elm?

I was following along with Programming Phoenix, but using Elm for my front end rather than JavaScript. The second part of that book describes how to use websockets. The book's running example has you create an authentication token for the client side to pass to Phoenix at connection creation time. The JavaScript Socket class provided with Phoenix allows that, but there's no obvious way to do it in Elm (as of 0.17 and the date of this question).
As in the book, make the token visible to Javascript by attaching it to window.
<script>window.auth_token = "<%= assigns[:auth_token] %>"</script>
In web/static/js/app.js, you'll have code that starts Elm. Pass the token there.
const c4uDiv = document.querySelector('#c4u-target');
if (c4uDiv) {
Elm.C4u.embed(c4uDiv, {authToken: window.auth_token});
}
On the Elm side, you'll use programWithFlags instead of program.
Your init function will take a flags argument. (I'm using the Navigation library for a single-page app, which is why there's a PageChoice argument as well.)
type alias Flags =
{ authToken : String
}
init : Flags -> MyNav.PageChoice -> ( Model, Cmd Msg )
Within init, tack on the token as a URI query pair. Note that you have to uri-encode because the token contains odd characters. Here's the crude way to do that. Note: I am using the elm-phoenix-socket library below, but the same hackery would be required with others.
let
uri = "ws://localhost:4000/socket/websocket?auth_token=" ++
(Http.uriEncode flags.authToken)
in
uri
|> Phoenix.Socket.init
|> Phoenix.Socket.withDebug
|> Phoenix.Socket.on "ping" "c4u" ReceiveMessage
I got there via a tweet by Brian about encoding from Elm.
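The uri-encoding step above matters in any language, because auth tokens routinely contain characters that are reserved in URLs (+, /, =). As a language-neutral illustration of what `Http.uriEncode` is doing (the token value here is made up):

```python
from urllib.parse import quote

token = "abc+/=="  # hypothetical token containing URL-reserved characters
endpoint = ("ws://localhost:4000/socket/websocket?token="
            + quote(token, safe=""))
print(endpoint)
# ws://localhost:4000/socket/websocket?token=abc%2B%2F%3D%3D
```

Without the encoding, the `+`, `/`, and `=` characters would be misinterpreted by the query-string parser on the server side.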
In this case I like to handle it from the JavaScript side. I tried to replicate the way the Phoenix client sets it up. Instead of passing the token, I passed the complete endpoint.
I've put the token in a JSON hash:
<script id="app-json" type="application/json"><%= raw @json %></script>
which I read on the client and pass to the Elm embed:
var data = JSON.parse(document.getElementById("app-json").innerHTML)
var token = encodeURIComponent(data.token)
var elm = window.Elm.App.embed(document.getElementById("elm-container"), {
socketEndpoint: "ws://" + window.location.host + "/socket/websocket?token=" + token
})

Ajax GET with Reagent

I am doing an Ajax GET from my Reagent application to load some stuff from the database.
I am not entirely sure what the best way is to get the result of such an Ajax call onto my page, considering that Reagent automatically re-renders a component when an atom it dereferences changes; if the call happens during rendering, I get an infinite sequence of Ajax calls.
For some code,
For some code:
(def matches (atom nil))

(defn render-matches [ms]
  (reset! matches (into [:ul] (map (fn [m] ^{:key m} [:li m])
                                   (walk/keywordize-keys (t/read (t/reader :json) ms))))))

This function basically creates a [:ul [:li "Stuff here"] [:li "And here"]],
which I would like displayed on my page, which now has the following code:
(defn standings-page []
  (GET "/list-matches"
       {:handler render-matches})
  @matches)
I think it's better to save only data in an atom and to generate the HTML as part of the component logic.
Also, it's better to trigger the AJAX call outside the render phase, for example before the component mounts, or as the result of an event (an on-click on a button, for example).
Like this:
(def matches (atom nil))

(defn component []
  (let [get-stuff (fn [] (GET "/..." {:handler (fn [response]
                                                 (reset! matches (:body response)))}))]
    (get-stuff) ;; <-- called before component mount
    (fn []
      [:ul
       (for [m @matches]
         ^{:key ...}
         [:li ...])])))
This is called form-2 in this post.

Access the querystring in a Spiffy app

One shouldn't have to ask this here, but thanks to the bad documentation, how do I access the querystring in a Spiffy (egg) app? Thanks!
(use intarweb spiffy sxml-serializer)

(tcp-buffer-size 2048)
(server-port 80)

(handle-not-found
 (lambda (path)
   ;; (print (request-uri (current-request)))
   ;; (print (request-method (current-request)))
   ;; (print (current-pathinfo (current-request)))
   ;; (print (current-file))
   ;; (print (remote-address))
   (send-response
    body: (serialize-sxml
           `(div (@ (class "page"))
                 (h1 ,path))
           method: 'html))))

(start-server)
I'm not exactly sure why you want to access the query string, but the URI objects in Spiffy come from the request object, as you've correctly identified. The request object is from intarweb, which sticks a uri-common object in the request-uri attribute.
You can access the constituent component parts using uri-common's accessors, as documented in the URI-common egg reference.
The query string is parsed into an alist for your convenience, accessible through (uri-query (request-uri (current-request))).
The "original" query string can be accessed a little differently: the uri-common egg is a convenience wrapper around the lower-level uri-generic egg, which you can access by calling (uri->uri-generic URI), where URI is (request-uri (current-request)), as before. Then you can access the raw query string through the uri-query procedure from that egg. Here's a simple example you can use at the REPL:
#;1> (use uri-common (prefix uri-generic generic:))
#;2> (uri-reference "http://foo?x=y")
#<URI-common: scheme=http port=#f host="foo" path=() query=((x . "y")) fragment=#f>
#;3> (uri-query #2)
((x . "y"))
#;4> (uri->uri-generic #2)
#<<URI>>
#;5> (generic:uri-query #4)
"x=y"
What I did here is prefix all the procedures from uri-generic with
"generic:". This is necessary because uri-common "shadows" all the
procedures defined by uri-generic.
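The parsed-alist vs. raw-string distinction that uri-common and uri-generic make is not Scheme-specific; as a point of comparison, Python's standard library exposes exactly the same two views of a query string:

```python
from urllib.parse import urlsplit, parse_qsl

uri = "http://foo?x=y"
raw_query = urlsplit(uri).query  # "x=y" - the raw string, like generic:uri-query
parsed = parse_qsl(raw_query)    # [('x', 'y')] - key/value pairs, like uri-query's alist
print(raw_query)
print(parsed)
```

As in the CHICKEN eggs, you usually want the parsed pairs, and reach for the raw string only when you need the exact bytes the client sent (e.g. for signature verification).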

How do you view the details of a submitted task with IPython Parallel?

I'm submitting tasks using a Load Balanced View.
I would like to be able to connect from a different client and view the remaining tasks by the function and parameters that were submitted.
For example:
def someFunc(parm1, parm2):
    return parm1 + parm2

lbv = client.load_balanced_view()
async_results = []

for parm1 in [0, 1, 2]:
    for parm2 in [0, 1, 2]:
        ar = lbv.apply_async(someFunc, parm1, parm2)
        async_results.append(ar)
From the client I submitted these tasks from, I can figure out which result went with which function call based on their order in the async_results array.
What I would like to know is: how can I figure out the function and parameters associated with a msg_id if I am retrieving the results from a different client, using the queue_status or history commands to get msg_ids and client.get_result to retrieve the results?
These things are pickled and stored in the 'buffers' in the hub's database. If you want to look at them, you have to fetch those buffers from the database and unpack them.
Assuming you have a list of msg_ids, here is a way you can reconstruct the f, args, and kwargs for all of those requests:
# msg_ids is a list of msg_id, however you decide to get that
from IPython.zmq.serialize import unpack_apply_message

# load the buffers from the hub's database:
query = rc.db_query({'msg_id': {'$in': msg_ids}}, keys=['msg_id', 'buffers'])

# query is now a list of dicts with two keys - msg_id and buffers
# now we can build a dict, keyed by msg_id, of the original function, args, and kwargs:
requests = {}
for q in query:
    f, args, kwargs = unpack_apply_message(q['buffers'])
    requests[q['msg_id']] = (f, args, kwargs)
From this, you should be able to associate tasks based on their function and args.
One caveat: since f has been through pickling, the comparison f is original_f will often be False, so you have to do looser comparisons, such as comparing f.__module__ + f.__name__ or similar.
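That looser comparison can be sketched with just the standard library (no IPython required). Here we simulate a deserialized copy by constructing a second, distinct function object from the same code, then match on module and qualified name instead of identity (`same_function` is a made-up helper name for the example):

```python
import types

def same_function(f, g):
    # Loose comparison that survives serialization round-trips:
    # match on module and qualified name, not object identity.
    return (f.__module__, f.__qualname__) == (g.__module__, g.__qualname__)

def some_func(a, b):
    return a + b

# Simulate what deserialization produces: a distinct function object
# backed by the same code.
copy_f = types.FunctionType(some_func.__code__, some_func.__globals__,
                            some_func.__name__)

print(copy_f is some_func)               # False: identity comparison fails
print(same_function(copy_f, some_func))  # True: loose comparison succeeds
```

Name-based matching can of course collide if two modules define identically named functions, but for associating your own submitted tasks it is usually sufficient.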
For a bit more detail, here is an example that generates some requests, then reconstructs and associates them based on the function and arguments, given some prior knowledge of what the original requests may have looked like.