How to Save a File using Ruby 2.2.3 and rest-client

I am trying to use a REST API to download a file. It appears to work, but I don't actually have a file downloaded. I am assuming that's because the response goes to memory and not to my file system.
Below is the portion of code responsible. My URL is slightly edited when pasting it below, and my authToken is valid.
backup_url = "#{proto}://#{my_host}/applications/ws/migration/export?noaudit=#{include_audit}&includebackup=#{include_backup_zips}&authToken=#{my_token}"
resource = RestClient::Resource.new(
  backup_url,
  :timeout => nil,
  :open_timeout => nil)
response = resource.get
if response.code == 200
  puts "Backup Complete"
else
  puts "Backup Failed"
  abort("Response Code was not 200: Response Code #{response.code}")
end
Returns:
# => 200 OK | application/zip 222094570 bytes
Backup Complete
There is no file present though.
Thanks,

Well, you actually have to write the response body to a file yourself:
require 'pathname'
Pathname('backup.zip').binwrite(response.to_s) # binary write, since it's a zip
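If holding a 200+ MB body in memory is a concern, rest-client can also stream the download straight to a temp file. A minimal sketch, assuming your rest-client version supports the :raw_response option:
require 'rest-client'
require 'fileutils'

# With :raw_response, rest-client streams the body into a Tempfile
# instead of holding it in a String.
raw = RestClient::Request.execute(:method => :get,
                                  :url => backup_url,
                                  :raw_response => true,
                                  :timeout => nil)
FileUtils.mv(raw.file.path, 'backup.zip')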

You can save the zip file using the File class:
...
if response.code == 200
  f = File.new("backup.zip", "wb")
  f << response.body
  f.close
  puts "Backup Complete"
else
...

Related

ruby net/http `read_body': Net::HTTPOK#read_body called twice (IOError)

I'm getting read_body called twice (IOError) using the net/http library. I'm trying to download files and use HTTP sessions efficiently. Looking for some help or advice to fix my issues. From my debug message it appears that when I log the response, readbody=true. Is that why read_body is called twice when I try to write the large file in chunks?
D, [2015-04-12T21:17:46.954928 #24741] DEBUG -- : #<Net::HTTPOK 200 OK readbody=true>
I, [2015-04-12T21:17:46.955060 #24741] INFO -- : file found at http://hidden:8080/job/project/1/maven-repository/repository/org/project/service/1/service-1.zip.md5
/usr/lib/ruby/2.2.0/net/http/response.rb:195:in `read_body': Net::HTTPOK#read_body called twice (IOError)
from ./deploy_application.rb:36:in `block in get_file'
from ./deploy_application.rb:35:in `open'
from ./deploy_application.rb:35:in `get_file'
from ./deploy_application.rb:59:in `block in <main>'
from ./deploy_application.rb:58:in `each'
from ./deploy_application.rb:58:in `<main>'
require 'net/http'
require 'logger'
STAMP = Time.now.utc.to_i
@log = Logger.new(STDOUT)
# project , build, service remove variables above
project = "project"
build = "1"
service = "service"
version = "1"
BASE_URI = URI("http://hidden:8080/job/#{project}/#{build}/maven-repository/repository/org/#{service}/#{version}/")
# file pattern for application is zip / jar. Hopefully the lib in the zipfile is acceptable.
# example for module download /#{service}/#{version}.zip /#{service}/#{version}.zip.md5 /#{service}/#{version}.jar /#{service}/#{version}.jar.md5
def clean_exit(code)
  # remove temp files on exit
end
def get_file(file)
  puts BASE_URI
  uri = URI.join(BASE_URI, file)
  @log.debug(uri)
  request = Net::HTTP::Get.new uri #.request_uri
  @log.debug(request)
  response = @http.request request
  @log.debug(response)
  case response
  when Net::HTTPOK
    size = 0
    progress = 0
    total = response.header["Content-Length"].to_i
    @log.info("file found at #{uri}")
    # need to handle file open error
    Dir.mkdir "/tmp/#{STAMP}"
    File.open "/tmp/#{STAMP}/#{file}", 'wb' do |io|
      response.read_body do |chunk|
        size += chunk.size
        new_progress = (size * 100) / total
        unless new_progress == progress
          @log.info("\rDownloading %s (%3d%%) " % [file, new_progress])
        end
        progress = new_progress
        io.write chunk
      end
    end
  when 404
    @log.error("maven repository file #{uri} not found")
    exit 4
  when 500...600
    @log.error("error getting #{uri}, server returned #{response.code}")
    exit 5
  else
    @log.error("unknown http response code #{response.code}")
  end
end
@http = Net::HTTP.new(BASE_URI.host, BASE_URI.port)
files = [ "#{service}-#{version}.zip.md5", "#{service}-#{version}.jar", "#{service}-#{version}.jar.md5" ].each do |file| #"#{service}-#{version}.zip",
  get_file(file)
end
Edit: Revised answer!
Net::HTTP#request, when called without a block, will pre-emptively read the body. The documentation isn't clear about this, but it hints at it by suggesting that the body is not read if a block is passed.
If you want to make the request without reading the body, you'll need to pass a block to the request call, and then read the body from within that. That is, you want something like this:
@http.request request do |response|
  # ...
  response.read_body do |chunk|
    # ...
  end
end
This is made clear in the implementation; Response#reading_body will first yield the unread response to a block if given (from #transport_request, which is called from #request), then read the body unconditionally. The block parameter to #request gives you that chance to intercept the response before the body is read.
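For example, here is a minimal, self-contained sketch of streaming a download with that pattern; the URL and output filename below are placeholders, not taken from the question:
require 'net/http'
require 'uri'

# Placeholder URL and filename; substitute the real artifact location.
uri = URI("http://hidden:8080/some/artifact.zip")

Net::HTTP.start(uri.host, uri.port) do |http|
  request = Net::HTTP::Get.new(uri)
  # Passing the block to #request yields the response before its body is read,
  # so read_body can stream the payload in chunks without the IOError.
  http.request(request) do |response|
    File.open("artifact.zip", "wb") do |io|
      response.read_body { |chunk| io.write(chunk) }
    end
  end
end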

Net::HTTP get a PDF file and save with paperclip

I want to download a PDF file from a web server. I use Ruby's Net::HTTP class.
def open_file(url)
  uri = URI.parse(url)
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  request = Net::HTTP::Get.new(uri.path)
  request.basic_auth(self.class.user, self.class.password)
  http.request(request)
end
It works; I retrieve my PDF file as a string like: %PDF-1.3\n%\ ...
I have a method that returns the result:
def file
  result = open_file(self.file_url)
  times = 0
  if result.code == 404 && times <= 5
    sleep(1)
    times += 1
    file
  else
    result.body
  end
end
(It's a recursive method because it's possible the file doesn't exist yet on the server.)
But when I try to save this file with Paperclip, I get an error: Paperclip::AdapterRegistry::NoHandlerError (No handler found for "%PDF-1.3\n% ...
I tried manipulating the file with StringIO... without success :(.
Anyone have an idea?
Assuming the PDF data you're getting back is okay (I'm not 100% sure it is), then you could do this:
file = StringIO.new(attachment) # mimic a real uploaded file; `attachment` is the PDF string, e.g. result.body
file.class.class_eval { attr_accessor :original_filename, :content_type } # add attrs that Paperclip needs
file.original_filename = "your_report.pdf"
file.content_type = "application/pdf"
then save the file with Paperclip.
(from "Save a Prawn PDF as a Paperclip attachment?")

Download a file only if it exists with ruby

I'm writing a scraper to download all the issues of The Exile available at http://exile.ru/archive/list.php?IBLOCK_ID=35&PARAMS=ISSUE.
So far, my code is like this:
require 'rubygems'
require 'open-uri'
DATA_DIR = "exile"
Dir.mkdir(DATA_DIR) unless File.exists?(DATA_DIR)
BASE_exile_URL = "http://exile.ru/docs/pdf/issues/exile"
for number in 120..290
  numero = BASE_exile_URL + number.to_s + ".pdf"
  puts "Downloading issue #{number}"
  open(numero) { |f|
    File.open("#{DATA_DIR}/#{number}.pdf",'w') do |file|
      file.puts f.read
    end
  }
end
puts "done"
The thing is, a lot of the issue links are down, and the code creates a PDF for every issue, so a dead link leaves an empty PDF behind. How can I change the code so that it only creates and saves a file if the link exists?
require 'open-uri'
DATA_DIR = "exile"
Dir.mkdir(DATA_DIR) unless File.exists?(DATA_DIR)
url_template      = "http://exile.ru/docs/pdf/issues/exile%d.pdf"
filename_template = "#{DATA_DIR}/%d.pdf"
(120..290).each do |number|
  pdf_url = url_template % number
  print "Downloading issue #{number}"
  # Opening the URL downloads the remote file.
  open(pdf_url) do |pdf_in|
    if pdf_in.read(4) == '%PDF'
      pdf_in.rewind
      File.open(filename_template % number, 'w') do |pdf_out|
        pdf_out.write(pdf_in.read)
      end
      print " OK\n"
    else
      print " #{pdf_url} is not a PDF\n"
    end
  end
end
puts "done"
open(url) downloads the file and provides a handle to a local temp file. A PDF starts with '%PDF'. After reading the first 4 characters, if the file is a PDF, the file pointer has to be put back to the beginning to capture the whole file when writing a local copy.
You can use this code to check whether the file exists:
require 'net/http'
def exist_the_pdf?(url_pdf)
  url = URI.parse(url_pdf)
  Net::HTTP.start(url.host, url.port) do |http|
    # The block's value is what Net::HTTP.start returns, so the method
    # returns true only when the server reports a PDF content type.
    http.request_head(url.path)['content-type'] == 'application/pdf'
  end
end
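A usage sketch tying that check into the download loop from the question; it reuses DATA_DIR from the question and assumes the server reports an accurate Content-Type for HEAD requests:
require 'open-uri'

(120..290).each do |number|
  pdf_url = "http://exile.ru/docs/pdf/issues/exile#{number}.pdf"
  next unless exist_the_pdf?(pdf_url)
  # Only reached when the HEAD request reported a PDF.
  File.open("#{DATA_DIR}/#{number}.pdf", 'wb') do |file|
    file.write(open(pdf_url).read)
  end
end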
Try this:
require 'rubygems'
require 'open-uri'
DATA_DIR = "exile"
Dir.mkdir(DATA_DIR) unless File.exists?(DATA_DIR)
BASE_exile_URL = "http://exile.ru/docs/pdf/issues/exile"
for number in 120..290
  numero = BASE_exile_URL + number.to_s + ".pdf"
  open(numero) { |f|
    content = f.read
    if content.include? "Link is missing"
      puts "Issue #{number} doesn't exist"
    else
      puts "Issue #{number} exists"
      File.open("./#{number}.pdf", 'w') do |file|
        file.write(content)
      end
    end
  }
end
puts "done"
The main thing I added is a check to see whether the string "Link is missing" appears in the response. I wanted to do it using HTTP status codes, but the server always gives a 200 back, which is not the best practice.
The thing to note is that with my code you always download the whole page just to look for that string, but I don't have any other idea to fix that at the moment.
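One way to avoid transferring the whole page just to test it, sketched here under the assumption that the server honors HTTP Range requests (it may not), is to fetch only the first few bytes and look for the PDF magic number:
require 'net/http'
require 'uri'

def looks_like_pdf?(url)
  uri = URI.parse(url)
  Net::HTTP.start(uri.host, uri.port) do |http|
    request = Net::HTTP::Get.new(uri.request_uri)
    # Ask for just the first four bytes; a real PDF starts with '%PDF'.
    request['Range'] = 'bytes=0-3'
    # start_with? keeps the check correct even if the server ignores Range
    # and sends the whole body back.
    http.request(request).body.to_s.start_with?('%PDF')
  end
end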

Ruby URL Validation

I wrote this script to parse a text file of URLs and return the HTTP response code for each, but I can't get it to work. I'm able to import and parse the file, but I'm unable to get the return code. Thanks in advance!
require 'net/http'
# Open URL from file
File.open("sample_input_file", "r") do |infile|
  while (URI = infile.gets)
  end
end
# Get HTTP response code
http = Net::HTTP.new
response = http.request_head(URI)
# Print result
if response.code != "200"
  puts URI + "Error"
else
  puts "Ok"
end
.gets returns a string; you need to actually make a URI from it by calling, for example, URI.parse:
http://www.ruby-doc.org/stdlib-1.9.3/libdoc/uri/rdoc/
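Here is a minimal sketch of the corrected script, assuming sample_input_file contains one URL per line; note that each URL is read into a regular variable (instead of reassigning the URI constant) and checked inside the loop:
require 'net/http'
require 'uri'

File.open("sample_input_file", "r") do |infile|
  while (line = infile.gets)
    uri = URI.parse(line.strip)
    response = Net::HTTP.start(uri.host, uri.port) do |http|
      http.request_head(uri.request_uri)
    end
    # Net::HTTPResponse#code is a String, so compare against "200".
    status = response.code == "200" ? "Ok" : "Error"
    puts "#{uri} #{status}"
  end
end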

How to store console output for each while loop step in one text file using Ruby $stdout?

Hi, I want to store the whole console output generated by all steps of a while loop.
I can store the console output in one text file for the first step of the while loop, but after that my loop terminates.
Code snippet is below:
while line = f.gets do
  puts "value: #{line}"
  newuri = a.to_s.gsub('fuzz', "#{line}")
  print "Attack Request:\n\n#{newuri}\n"
  nuri = URI.parse("#{newuri}")
  Net::HTTP.start(nuri.host, nuri.port) do |http|
    request = Net::HTTP::Get.new nuri.request_uri
    response = http.request request
    puts "Response"
    puts response.body
    $stdout = File.new('out.txt','w')
  end
end
I must admit that your question is a little unclear. I assume that you have a file with addresses, and you want to have a file with the responses received from these addresses.
Why are you trying to redefine $stdout? Maybe all you need is to puts the text directly to the "out.txt" file?
Try this: (I have removed all "unimportant" details)
File.open('out.txt','w') do |outfile|
  while line = f.gets do
    puts "value: #{line}"          # This is a "debug" info
    outfile.puts "value: #{line}"  # This goes to the output file
    # ...
    Net::HTTP.start(nuri.host, nuri.port) do |http|
      request = Net::HTTP::Get.new nuri.request_uri
      response = http.request request
      outfile.puts "Response"
      outfile.puts response.body
    end
  end
end
So, if I understood your problem correctly, you usually do not need to "redefine" standard output; you just have to write the data to the correct file. The method puts also exists for opened files and, by default, writes to standard output.
In addition, I suggest using standard error instead of standard output if you want some debugging information (not in this snippet of code, just in other situations where it might be useful):
$stderr.puts "Some warning"
or
STDERR.puts "Some warning"
...whichever looks better to you. Both the variable and the constant refer to the same object.
