I want different handlers to set different keys in the session without affecting each other. I'm working from this wiki article, which advises using assoc. I thought I could use assoc-in to update a path in the session.
(defn handler-one
  [request]
  (prn "Session before one" (:session request))
  (-> (response "ONE")
      (content-type "text/plain")
      (#(assoc-in % [:session :key-one] "one"))))

(defn handler-two
  [request]
  (prn "Session before two" (:session request))
  (-> (response "TWO")
      (content-type "text/plain")
      (#(assoc-in % [:session :key-two] "two"))))
If I call handler-one repeatedly, it prints Session before one {:key-one "one"}, and likewise handler-two prints the session values from its own previous call.
By setting a session key with assoc-in I would expect both keys to end up set, i.e. {:key-one "one" :key-two "two"}, but it appears that the entire session map is replaced.
Am I doing this wrong?
You're printing the session from the request, but you're assoc'ing onto the (nonexistent) session in the response, so you end up with a session containing only the last added key. You should get the session out of the request, assoc into that, and then return the new session as part of the response.
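For example, handler-one might look something like this (a minimal sketch: take the session from the request, assoc the new key into it, and attach the result to the response):

(defn handler-one
  [request]
  (let [session (assoc (:session request) :key-one "one")]
    (-> (response "ONE")
        (content-type "text/plain")
        ;; attach the updated session so keys set by other handlers are preserved
        (assoc :session session))))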
Related
It's a continuation of my previous question, How to produce a lazy sequence by portion in clojure?
I want to download data from a database in portions. Initially I download the first 500 rows, then I send a request to fetch the next 500 rows, and so on until I receive all the data from the server.
I wrote this code:
(jdbc/atomic conn
  (with-open [cursor (jdbc/fetch-lazy conn [sql_query])]
    (let [lazyseq (jdbc/cursor->lazyseq cursor)
          counter (atom 1)]
      (swap! lazyseq_maps assoc :session_id {:get_next? (chan 1) :over_500 (chan 1) :data []})
      (>!! (:get_next? (:session_id @lazyseq_maps)) true)
      (go
        (doseq [row lazyseq]
          (swap! counter inc)
          (when (<! (:get_next? (:session_id @lazyseq_maps)))
            (swap! lazyseq_maps update-in [:session_id :data] conj row)
            (if (not= 0 (mod @counter 500))
              (>! (:get_next? (:session_id @lazyseq_maps)) true)
              (>! (:over_500 (:session_id @lazyseq_maps)) true))))
        (close! (:get_next? (:session_id @lazyseq_maps)))
        (close! (:over_500 (:session_id @lazyseq_maps)))
        (.close conn))
      (when (<!! (:over_500 (:session_id @lazyseq_maps)))
        {:message "over 500 rows"
         :id :session_id
         :data (:data (:session_id @lazyseq_maps))}))))
I fetch rows with the help of the doseq loop. When doseq has passed 500 rows, I park the loop at (when (<! (:get_next? (:session_id @lazyseq_maps))) ...) and wait for a signal from outside to retrieve the next 500 rows.
But here I have a problem. When I send the signal, the program throws the error "Resultset is closed", i.e. the connection is closed outside the with-open scope. I don't understand why, because the go block is placed inside the with-open scope. Can you help me solve the problem?
(go ...) returns immediately, and therefore so does (with-open ...), which closes the cursor before the go block has consumed the rows.
You may want to do it the other way around:
(go (with-open ...))
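A rough sketch of that rearrangement, reusing the names from your code (just the shape of it, not a drop-in fix):

(go
  (with-open [cursor (jdbc/fetch-lazy conn [sql_query])]
    (doseq [row (jdbc/cursor->lazyseq cursor)]
      ;; park here until the outside consumer asks for the next row
      (when (<! (:get_next? (:session_id @lazyseq_maps)))
        (swap! lazyseq_maps update-in [:session_id :data] conj row)))))

Now the go block owns the with-open scope, so the cursor stays open until the consuming loop has finished.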
However, do note that this process will hold on to a database connection (a scarce resource!) for a potentially very long time, which may not be desirable, and kind of goes against the benefit of having 'lightweight' threads thanks to go blocks. Here are some alternatives to consider:
Maybe you could re-open a database connection for each batch?
Maybe you could eagerly stream the whole result set to an external store (e.g. AWS S3) and have the client poll against that?
Unless you are on a seriously memory-constrained system, I would recommend just loading all the rows into RAM at once and closing the DB connection. Otherwise your complete solution will likely be very complex and difficult to test and reason about.
If you have tens of millions of rows, maybe you could fetch them in partitions? (A rough sketch of that follows this list.)
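For that last option, a minimal sketch of partitioned fetching, assuming a clojure.java.jdbc-style jdbc/query function rather than the cursor API from your code; db-spec, my_table and id are placeholders, not names from the original:

;; Fetch one fixed-size partition with LIMIT/OFFSET, so each batch is a
;; short-lived query instead of a long-lived cursor.
(defn fetch-partition [db-spec batch-size offset]
  (jdbc/query db-spec
              ["SELECT * FROM my_table ORDER BY id LIMIT ? OFFSET ?"
               batch-size offset]))

;; Lazy sequence of partitions; stops at the first empty batch.
(defn all-partitions [db-spec batch-size]
  (->> (iterate #(+ % batch-size) 0)
       (map #(fetch-partition db-spec batch-size %))
       (take-while seq)))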
I am successfully getting a response from an endpoint using cljs-ajax (as shown below). However, I cannot seem to differentiate between different success status codes in my response handler.
(ns mynamespace
  (:require [ajax.core :as ajax]))

(defn start-monitoring []
  (let [handler (fn [[ok response]]
                  (if ok
                    (.log js/console response)
                    (.error js/console (str response))))]
    (ajax/ajax-request {:uri "/myendpoint"
                        :method :get
                        :params {:since (.getTime (js/Date.))}
                        :handler handler
                        :format (ajax/json-request-format)
                        :response-format (ajax/json-response-format {:keywords? true})})))
"ok" in the handler appears to simply be a true/false success flag, and does not differentiate between 200 and 204 status codes, both of which are considered successes. The response body is whatever text is returned in the response, and doesn't appear to contain a status code, unless the request failed.
How can I determine the status code of the response?
It seems the response is a map with keys like :status, which contains 200 in my test.
The rest of the keys are:
(:status :failure :response :status-text :original-text)
Use :response-format (ajax/ring-response-format).
See also: https://github.com/JulianBirch/cljs-ajax/issues/57
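For example, the handler from the question could be adapted roughly like this (a sketch, assuming that with ring-response-format the success value is a Ring-style response map with :status, :headers and :body):

(defn start-monitoring []
  (let [handler (fn [[ok response]]
                  (if ok
                    ;; assumption: `response` is a Ring-style map here,
                    ;; so the status code is available under :status
                    (.log js/console (str (:status response) " " (:body response)))
                    (.error js/console (str response))))]
    (ajax/ajax-request {:uri "/myendpoint"
                        :method :get
                        :params {:since (.getTime (js/Date.))}
                        :handler handler
                        :format (ajax/json-request-format)
                        :response-format (ajax/ring-response-format)})))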
I use url.el to make an HTTP request with url-retrieve-synchronously, and when the URL is correct all is fine.
Code example:
(with-current-buffer (url-retrieve-synchronously my-url)
  (hoge--log-debug "\n%s" (buffer-string)))
But:
How can I handle the HTTP response when the URL is incorrect, e.g. "http://2222httpbin.org/xml" (unknown host)?
How can I get the HTTP status of the response?
Apparently url-retrieve-synchronously only returns a valid buffer or nil. I don't think you can retrieve the status. Your best option is to call url-retrieve which allows you to pass a callback function, where you can access all the details.
Workaround
Looking at the processes given by list-processes, it appeared that the background process was still hanging there. Deleting it would call the callback function. So we only need to delete it ourselves when we know the process failed:
(defun url-retrieve-please (url callback)
  (apply callback
         (block waiter
           (let ((process
                  (get-buffer-process
                   (url-retrieve url (lambda (&rest args)
                                       (return-from waiter args))))))
             ;; We let a chance for the connection to establish
             ;; properly. When it succeeds, the callback will return
             ;; from the waiter block.
             ;; When it fails to connect, we exit this loop.
             (loop until (eq 'failed (process-status process))
                   ;; sitting leaves a chance for Emacs to handle
                   ;; asynchronous tasks.
                   do (sit-for 0.1))
             ;; Deleting the process forces the above lambda callback
             ;; to be called, thanks to the process sentinel being in
             ;; place. In the tests, we always exit from the above
             ;; callback and not after the block normally exits. The
             ;; behaviour seems quite regular, so I don't sleep
             ;; forever after this command.
             (delete-process process)))))
Tests
(url-retrieve-please "http://yahoo.com" (lambda (&rest args) (message "%S" args)))
"((:redirect \"https://www.yahoo.com/\" :peer (:certificate (:version 3 :serial-number \"1c:25:43:0e:d0:a6:02:e8:cc:3a:97:7b:05:39:cc:e5\" :issuer \"C=US,O=Symantec Corporation,OU=Symantec Trust Network,CN=Symantec Class 3 Secure Server CA - G4\" :valid-from \"2015-10-31\" :valid-to \"2017-10-30\" :subject \"C=US,ST=California,L=Sunnyvale,O=Yahoo Inc.,OU=Information Technology,CN=www.yahoo.com\" :public-key-algorithm \"RSA\" :certificate-security-level \"Medium\" :signature-algorithm \"RSA-SHA256\" :public-key-id \"sha1:47:16:26:79:c6:4f:b2:0f:4b:89:ea:28:dc:0c:41:6e:80:7d:59:a9\" :certificate-id \"sha1:41:30:72:f8:03:ce:96:12:10:e9:a4:5d:10:da:14:b0:d2:d4:85:32\") :key-exchange \"ECDHE-RSA\" :protocol \"TLS1.2\" :cipher \"AES-128-GCM\" :mac \"AEAD\")))"
(url-retrieve-please "http://2222httpbin.org/xml" (lambda (&rest args) (message "%S" args)))
"((:error (error connection-failed \"deleted
\" :host \"2222httpbin.org\" :service 80)))"
To retrieve the status code, use url-http-symbol-value-in-buffer on the obtained buffer.
Example:
(url-http-symbol-value-in-buffer 'url-http-response-status
                                 (url-retrieve-synchronously "http://httpbin.org/get"))
I have tried for a few days and I am a little confused here.
I am using Clojure http-kit to make a keepalive GET request.
(ns weibo-collector.weibo
  (:require [org.httpkit.client :as http]
            [clojure.java.io :as io]))

(def sub-url "http://c.api.weibo.com/datapush/status?subid=10542")

(defn spit-to-file [content]
  (spit "sample.data" content :append true))

@(http/get sub-url {:as :stream :keepalive 3000000}
           (fn [{:keys [status headers body error opts]}]
             (spit-to-file body)))
I am pretty sure that I made a persistent connection to the target server, but nothing is written to the sample.data file.
I tried :as :stream and :as :text.
I also tried a Ruby version; the program creates a persistent connection too, but still nothing is written.
Typically, the target will use a webhook to notify my server that new data is coming, but how do I get the data from the persistent connection?
---EDIT---
require 'awesome_print'
url = "http://c.api.weibo.com/datapush/status?subid=10542"
require "httpclient"
c = HTTPClient.new
conn = c.get_async(url)
Thread.new do
  res = conn.pop
  while true
    text = ""
    while ch = res.content.read(1)
      text = text + ch
      break if text.end_with? "\r\n"
    end
    ap text
  end
end
while true
end
Above is a working example in Ruby; it uses a thread to read data from the connection. So I must be missing something to get the data in Clojure.
I have a requirement to proxy a request in a Rails app. I was hoping I could proxy it with chunking (so, 1 chunk received, one chunk is sent). The app is working fine without chunking (load the request into memory, and transmit).
Here is my code to proxy the chunks through to the end-client:
self.response.headers['Last-Modified'] = Time.now.ctime.to_s
self.response_body = Enumerator.new do |y|
  client = HTTPClient.new
  http_response = client.get(proxy_url, nil, headers) do |chunk|
    y << chunk
  end
end
The problem is, I can't inspect "http_response" until all the chunks have been received, thus I can't set the headers based on the headers of the client.
What I'm trying to do is transmit the headers returned from the client before the first chunk is sent. Is this possible?
If not, is this pattern possible in any other Ruby HTTP client gem?
Update
I have a solution for you.
If you call get_async instead, it will return immediately with an HTTPClient::Connection object that is updated with the header information as soon as it is received. The code sample below demonstrates this.
The patch to HTTPClient::Connection is almost certainly not necessary for you, but it lets you write things like conn.queue.size? and conn.queue.empty?.
conn.pop blocks until the response (or exception) has been pushed to the queue by the async thread and then returns the normal HTTP::Message object. (Note that, if you are using the monkey patch, you can use conn.queue.empty? to see if pop is going to block.)
resp.content returns an IO object which is a pipe read endpoint, and it can be read as soon as pop has returned. The other end is written by the async thread as the data arrives, and you can read the entire content in one go or in whatever size chunks you like using read.
require 'httpclient'

class HTTPClient::Connection
  attr_reader :queue
end

client = HTTPClient.new
conn = client.get_async 'http://en.wikipedia.org/wiki/Ruby_(programming_language)'
resp = conn.pop
resp.header.all.each { |name, val| puts "#{name}=#{val}" }
puts
pipe = resp.content
while chunk = pipe.read(8192)
  print chunk
end
You could parse the first chunk you receive to extract the headers, but I suggest you call head first to get the header information. Then do the get as well.
(Updated - the first chunk holds the beginning of the content so this won't work.)