I am looking to download a PDF directly from a blob URL using Ruby code. The URL appears like this:
blob:https://dev.myapp.com/ba853441-d1f7-4595-9227-1b0e445b188b
I can visit the link in a web browser and the PDF opens in a new tab. Inspecting the request, it is a plain GET with only a few browser/user-agent related request headers.
I've attempted to use OpenURI, but it rejects the URL as not being an HTTP URI. OpenURI works just fine with files from URLs that look like https://.../invoice.pdf
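Roughly, the OpenURI attempt looks like this (the invoice path is a made-up example); the plain HTTPS case works, but the blob: URL is rejected because OpenURI only handles http/https/ftp:
require 'open-uri'

# Works: an ordinary HTTPS URL pointing at a PDF (hypothetical path)
pdf_data = URI.open('https://dev.myapp.com/invoice.pdf').read

# Fails: the blob: scheme is not an HTTP URI as far as OpenURI is concerned
URI.open('blob:https://dev.myapp.com/ba853441-d1f7-4595-9227-1b0e445b188b')
# => raises an error complaining the URL is not an HTTP URI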
I've also tried to go the JavaScript route with a snippet, but it returns 0 for me, as others have also reported.
Any automated solution that requires triggering a download via click and then navigating the local disk is not sufficient for my project. I am looking to retrieve files directly from the URL, in the same fashion that OpenURI works for a file on a server. Thanks in advance.
I was able to get the JavaScript snippet to work. The piece I was missing was that the blob URL needed to be opened/visited in the browser first (in this case, Chrome). Here's a code snippet that might work for others.
# Runs an async XHR in the browser against the (already-visited) blob URL
# and returns the response body as a base64 string.
def get_file_content_in_base64(uri)
  result = page.evaluate_async_script("
    var uri = arguments[0];
    var callback = arguments[1];
    var toBase64 = function(buffer){for(var r,n=new Uint8Array(buffer),t=n.length,a=new Uint8Array(4*Math.ceil(t/3)),i=new Uint8Array(64),o=0,c=0;64>c;++c)i[c]='ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'.charCodeAt(c);for(c=0;t-t%3>c;c+=3,o+=4)r=n[c]<<16|n[c+1]<<8|n[c+2],a[o]=i[r>>18],a[o+1]=i[r>>12&63],a[o+2]=i[r>>6&63],a[o+3]=i[63&r];return t%3===1?(r=n[t-1],a[o]=i[r>>2],a[o+1]=i[r<<4&63],a[o+2]=61,a[o+3]=61):t%3===2&&(r=(n[t-2]<<8)+n[t-1],a[o]=i[r>>10],a[o+1]=i[r>>4&63],a[o+2]=i[r<<2&63],a[o+3]=61),new TextDecoder('ascii').decode(a)};
    var xhr = new XMLHttpRequest();
    xhr.responseType = 'arraybuffer';
    xhr.onload = function(){ callback(toBase64(xhr.response)) };
    xhr.onerror = function(){ callback(xhr.status) };
    xhr.open('GET', uri);
    xhr.send();
  ", uri)
  if result.is_a? Integer
    fail 'Request failed with status %s' % result
  end
  return result
end
require 'base64'

def get_pdf_from_blob
  yield    # Yield to whatever click actions trigger the file download
  sleep 3  # Wait for the direct download to complete

  # Chrome's downloads page lists the file name and its blob URL; this
  # text-position lookup is brittle and tied to that page's layout.
  visit 'chrome://downloads'
  sleep 3
  file_name = page.text.split("\n")[3]
  blob_url = page.text.split("\n")[4]

  visit blob_url
  sleep 3  # Wait for the PDF to load

  base64_str = get_file_content_in_base64(blob_url)
  decoded_content = Base64.decode64(base64_str)

  file_path = "./tmp/#{file_name}"
  File.open(file_path, "wb") do |f|
    f.write(decoded_content)
  end
  return file_path
end
From here you can send file_path to S3, feed it to a PDF reader, etc.
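As an illustration, a hedged usage sketch (the button label is hypothetical and depends on your app's markup):
# Hypothetical usage inside a Capybara session:
path = get_pdf_from_blob do
  click_button 'Download invoice'  # whatever action opens the blob PDF
end
# path now points at ./tmp/<file_name>, ready to upload to S3, parse, etc.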
I'm trying to write some Ruby code to update GitLab CI/CD variables using the REST update-variable endpoint. When I perform a curl with the same path, the same private token, and the same --form data, it updates the variable as expected. When I use the Ruby code I put together from reading Stack Overflow and the Net::HTTP docs, it fails with a 404 URL not found.
I can use a similar piece of code to create a new CI/CD variable successfully. I can also delete an existing variable and re-create it, but I would like to know the mistake I am making in the update call.
Can someone point out what I did wrong?
#!/usr/bin/env ruby
require 'net/http'
require 'uri'

token = File.read(__dir__ + '/.gitlab-token').chomp

host = 'https://gitlab.com/'
variables_path = 'api/v4/projects/123456/variables'
env_var = 'MY_VAR'
update_uri = URI(host + variables_path + '/' + env_var)
# I've written the above this way because my actual code
# has a delete and create in order to "update" the variable

response = Net::HTTP.start(update_uri.host, update_uri.port, use_ssl: true) do |http|
  update_request = Net::HTTP::Post.new(update_uri)
  update_request['PRIVATE-TOKEN'] = token
  form_data = [
    ['value', 'a new value']
  ]
  update_request.set_form(form_data, 'multipart/form-data')
  response = http.request(update_request)
  response.body
end
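For what it's worth, one way to compare what Net::HTTP actually sends against the working curl call is to enable debug output on the connection. A minimal sketch, reusing the update_uri and token above:
http = Net::HTTP.new(update_uri.host, update_uri.port)
http.use_ssl = true
http.set_debug_output($stderr)  # dump the raw request/response, similar to curl -v

request = Net::HTTP::Post.new(update_uri)
request['PRIVATE-TOKEN'] = token
request.set_form([['value', 'a new value']], 'multipart/form-data')

response = http.request(request)  # opens the connection as needed
puts response.code
puts response.body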
I have been trying for days to pull down activity data from the Withings API using the OAuth Ruby gem. Regardless of what method I try, I consistently get back a 503 error response ("not enough params"), even though I copied the example URI from the documentation and swapped in my userid. Has anybody had any luck with this in the past? I hope it is just something silly I am doing.
class Withings
  API_KEY = 'REMOVED'
  API_SECRET = 'REMOVED'

  CONFIGURATION = { site: 'https://oauth.withings.com', request_token_path: '/account/request_token',
                    access_token_path: '/account/access_token', authorize_path: '/account/authorize' }

  before do
    @consumer = OAuth::Consumer.new API_KEY, API_SECRET, CONFIGURATION
    @base_url ||= "#{request.env['rack.url_scheme']}://#{request.env['HTTP_HOST']}#{request.env['SCRIPT_NAME']}"
  end

  get '/' do
    @request_token = @consumer.get_request_token oauth_callback: "#{@base_url}/access_token"
    session[:token] = @request_token.token
    session[:secret] = @request_token.secret
    redirect @request_token.authorize_url
  end

  get '/access_token' do
    @request_token = OAuth::RequestToken.new @consumer, session[:token], session[:secret]
    @access_token = @request_token.get_access_token oauth_verifier: params[:oauth_verifier]
    session[:token] = @access_token.token
    session[:secret] = @access_token.secret
    session[:userid] = params[:userid]
    redirect "#{@base_url}/activity"
  end

  get '/activity' do
    @access_token = OAuth::AccessToken.new @consumer, session[:token], session[:secret]
    response = @access_token.get("http://wbsapi.withings.net/v2/measure?action=getactivity&userid=#{session[:userid]}&startdateymd=2014-01-01&enddateymd=2014-05-09")
    JSON.parse(response.body)
  end
end
For other API endpoints I get an error response of 247 - The userid provided is absent, or incorrect. This is really frustrating. Thanks
So I figured out the answer after a copious amount of Googling and getting a better understanding of both the Withings API and the OAuth library I was using. Basically, Withings uses query strings to pass API parameters. I thought I was passing these parameters correctly when making API calls, but apparently I needed to explicitly tell the OAuth library to use the query-string scheme, like so:
http_method: :get, scheme: :query_string
I appended this to my OAuth consumer configuration and everything worked immediately.
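For clarity, a sketch of how the consumer configuration might look with those two options added (same endpoints as above; this is my reading of the fix, not copied from the Withings docs):
CONFIGURATION = { site: 'https://oauth.withings.com',
                  request_token_path: '/account/request_token',
                  access_token_path: '/account/access_token',
                  authorize_path: '/account/authorize',
                  http_method: :get,
                  scheme: :query_string }

@consumer = OAuth::Consumer.new API_KEY, API_SECRET, CONFIGURATION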
I am trying to access the session cookie within a spider. I first log in to a social network in a spider:
def parse(self, response):
    return [FormRequest.from_response(response,
                                      formname='login_form',
                                      formdata={'email': '...', 'pass': '...'},
                                      callback=self.after_login)]
In after_login, I would like to access the session cookies in order to pass them to another module (Selenium here) to further process the page with an authenticated session.
I would like something like this:
def after_login(self, response):
    # process response
    .....
    # access the cookies of that session to access another URL in the
    # same domain with the authenticated session.
    # Something like:
    session_cookies = XXX.get_session_cookies()
    data = another_function(url, cookies)
Unfortunately, response.cookies does not return the session cookies.
How can I get the session cookies? I was looking at the cookies middleware (scrapy.contrib.downloadermiddleware.cookies and scrapy.http.cookies), but there doesn't seem to be any straightforward way to access the session cookies.
Some more details about my original question:
Unfortunately, I used your idea but I didn't see the cookies, although I know for sure that they exist, since the scrapy.contrib.downloadermiddleware.cookies middleware does print out the cookies! These are exactly the cookies that I want to grab.
So here is what I am doing:
The after_login(self, response) method receives the response variable after proper authentication, and then I access a URL with the session data:
def after_login(self, response):
    # testing to see if I can get the session cookies
    cookieJar = response.meta.setdefault('cookie_jar', CookieJar())
    cookieJar.extract_cookies(response, response.request)
    cookies_test = cookieJar._cookies
    print "cookies - test:", cookies_test

    # URL access with authenticated session
    url = "http://site.org/?id=XXXX"
    request = Request(url=url, callback=self.get_pict)
    return [request]
As the output below shows, there are indeed cookies, but I fail to capture them with cookieJar:
cookies - test: {}
2012-01-02 22:44:39-0800 [myspider] DEBUG: Sending cookies to: <GET http://www.facebook.com/profile.php?id=529907453>
Cookie: xxx=3..........; yyy=34.............; zzz=.................; uuu=44..........
So I would like to get a dictionary containing the keys xxx, yyy etc with the corresponding values.
Thanks :)
A classic example is having a login server, which provides a new session id after a successful login. This new session id should be used with another request.
Here is the code, picked up from the source, which seems to work for me.
print 'cookie from login', response.headers.getlist('Set-Cookie')[0].split(";")[0].split("=")[1]
Code:
def check_logged(self, response):
    tmpCookie = response.headers.getlist('Set-Cookie')[0].split(";")[0].split("=")[1]
    print 'cookie from login', response.headers.getlist('Set-Cookie')[0].split(";")[0].split("=")[1]
    cookieHolder = dict(SESSION_ID=tmpCookie)
    #print response.body
    if "my name" in response.body:
        yield Request(url="<<new url for another server>>",
                      cookies=cookieHolder,
                      callback=self."<<another function here>>")
    else:
        print "login failed"
        return
Maybe this is overkill, but I don't know how you are going to use those cookies, so it might be useful (an excerpt from real code - adapt it to your case):
from scrapy.http.cookies import CookieJar

class MySpider(BaseSpider):
    def parse(self, response):
        cookieJar = response.meta.setdefault('cookie_jar', CookieJar())
        cookieJar.extract_cookies(response, response.request)
        request = Request(nextPageLink, callback=self.parse2,
                          meta={'dont_merge_cookies': True, 'cookie_jar': cookieJar})
        cookieJar.add_cookie_header(request)  # apply Set-Cookie ourselves
CookieJar has some useful methods.
If you still don't see the cookies - maybe they are not there?
UPDATE:
Looking at CookiesMiddleware code:
class CookiesMiddleware(object):
    def _debug_cookie(self, request, spider):
        if self.debug:
            cl = request.headers.getlist('Cookie')
            if cl:
                msg = "Sending cookies to: %s" % request + os.linesep
                msg += os.linesep.join("Cookie: %s" % c for c in cl)
                log.msg(msg, spider=spider, level=log.DEBUG)
So, try request.headers.getlist('Cookie')
This works for me
response.request.headers.get('Cookie')
It seems to return all the cookies that were introduced by the middleware into the request, session cookies or otherwise.
As of 2021 (Scrapy 2.5.1), this is still not particularly straightforward. But you can access downloader middlewares (like CookiesMiddleware) from within a spider via self.crawler.engine.downloader:
def after_login(self, response):
    downloader_middlewares = self.crawler.engine.downloader.middleware.middlewares
    cookies_mw = next(iter(mw for mw in downloader_middlewares if isinstance(mw, CookiesMiddleware)))
    jar = cookies_mw.jars[response.meta.get('cookiejar')].jar
    cookies_list = [vars(cookie) for domain in jar._cookies.values() for path in domain.values() for cookie in path.values()]
    # or
    cookies_dict = {cookie.name: cookie.value for domain in jar._cookies.values() for path in domain.values() for cookie in path.values()}
    ...
Both output formats above can be passed to other requests using the cookies parameter.
Ruby's Net::HTTP needs to be given a full URL in order for it to connect to the server and get the file properly. By "full URL" I mean a URL including the http:// part and the trailing slash if it needs it. For instance, Net::HTTP won't connect to a URL that looks like example.com, but will connect just fine to http://example.com/. Is there any way to make sure a URL is a full URL, and add the required parts if it isn't?
EDIT: Here is the code I am using:
parsed_url = URI.parse(url)
req = Net::HTTP::Get.new(parsed_url.path)
res = Net::HTTP.start(parsed_url.host, parsed_url.port) { |http|
  http.request(req)
}
If this is only doing what the sample code shows, Open-URI would be an easier approach.
require 'open-uri'
res = open(url).read
This would do a simple check for http/https:
if !(url =~ /^https?:/i)
  url = "http://" + url
end
This is a more general check that handles other protocols (ftp, etc.):
if !(url =~ /^\w+:/i)
  url = "http://" + url
end
In order to make sure parsed_url.path gives you a proper value (it should be / when no specific path was provided), you could do something like this:
req = Net::HTTP::Get.new(parsed_url.path.empty? ? '/' : parsed_url.path)
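Putting those pieces together, here's a rough sketch (my own combination of the snippets above, with a helper name I made up) that normalizes the URL and then fetches it with Net::HTTP:
require 'net/http'
require 'uri'

def fetch(url)
  url = "http://" + url unless url =~ /^\w+:/i  # prepend a scheme if none is present
  parsed_url = URI.parse(url)
  req = Net::HTTP::Get.new(parsed_url.path.empty? ? '/' : parsed_url.path)
  Net::HTTP.start(parsed_url.host, parsed_url.port) { |http| http.request(req) }
end

res = fetch('example.com')  # connects to http://example.com/
puts res.code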
I am writing a video crawler in Ruby. I have to log in to a page (with cookies enabled) and then download the pages behind the login. For that I am using the Curl (Curb) library in Ruby. I can successfully log in, but I can't download those pages with curl. How can I fix this, or download the pages some other way?
My code is
curl = Curl::Easy.new(1st url)
curl.follow_location = true
curl.enable_cookies = true
curl.cookiefile = "cookie.txt"
curl.cookiejar = "cookie.txt"
curl.http_post(1st url,field)
curl.perform
curl = Curl::Easy.perform(2nd url)
curl.follow_location = true
curl.enable_cookies = true
curl.cookiefile = "cookie.txt"
curl.cookiejar = "cookie.txt"
curl.http_get
code = curl.body_str
What I've seen while writing my own similar "post-then-get" script is that Ruby/Curb (I'm using version 0.7.15 with Ruby 1.8) seems to ignore the cookiejar/cookiefile fields of a Curl::Easy object. If I set either of those fields and the http_post completes successfully, no cookiejar or cookiefile file is created. Also, curl.cookies will still be nil after your curl.http_post; however, the cookies ARE set within the curl object. I promise :)
I think where you're going wrong is here:
curl = Curl::Easy.perform(2nd url)
The curb documentation states that this creates a new object. That new object doesn't have any of your existing cookies set. If you change your code to look like the following, I believe it should work. I've also removed the curl.perform for the first url since curl.http_post already implicitly does the "perform". You were basically http_post'ing twice before trying your http_get.
curl = Curl::Easy.new(1st url)
curl.follow_location = true
curl.enable_cookies = true
curl.http_post(1st url,field)
curl.url = 2nd url
curl.http_get
code = curl.body_str
If this still doesn't seem to be working for you, you can verify whether the cookie is getting set by adding
curl.verbose = true
before
curl.http_post
Your Curl::Easy object will dump all the headers it gets in the server's response to $stdout, and somewhere in there you should see a line stating that it added/set a cookie. I don't have any example output right now, but I'll try to post a follow-up soon.
HTTPClient automatically enables cookies, as does Mechanize.
From the HTTPClient docs:
clnt = HTTPClient.new
clnt.get_content(url1) # receives Cookies.
clnt.get_content(url2) # sends Cookies if needed.
Posting a form is easy too:
body = { 'keyword' => 'ruby', 'lang' => 'en' }
res = clnt.post(uri, body)
Mechanize makes this sort of thing really simple (it will handle storing the cookies, among other things).
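For instance, a minimal sketch of the same post-then-get flow with Mechanize (the URLs and field names are hypothetical; Mechanize keeps the session cookies in its cookie jar between requests):
require 'mechanize'

agent = Mechanize.new
login_page = agent.get('http://example.com/login')        # hypothetical login URL

form = login_page.forms.first                             # assumes the login form is the first form
form['username'] = 'me'                                   # hypothetical field names
form['password'] = 'secret'
agent.submit(form)                                        # cookies from the response are stored automatically

page = agent.get('http://example.com/protected/video/1')  # later requests reuse the stored cookies
puts page.body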