Accessing a 3rd party API JSON object in Ruby

I've been messing around with Kickbox's API for email verification. I'm trying to have the program display only the result field from the returned JSON.
Here's the code:
require "kickbox"
require 'httparty'
require 'json'
client = Kickbox::Client.new('ac748asdfwef2fbf0e8177786233a6906cd3dcaa')
kickbox = client.kickbox()
response = kickbox.verify("test@easdfwf.com")
file = File.read(response)
json = JSON.parse(file)
json['result']
I'm getting an error:
verify.rb:10:in `read': no implicit conversion of Kickbox::HttpClient::Response into String (TypeError)
	from verify.rb:10:in `<main>'
Here's a sample response:
{
"result":"undeliverable",
"reason":"rejected_email",
"role":false,
"free":false,
"disposable":false,
"accept_all":false,
"did_you_mean":"bill.lumbergh#gmail.com",
"sendex":0,
"email":"bill.lumbergh#gamil.com",
"user":"bill.lumbergh",
"domain":"gamil.com",
"success":true,
"message":null
}

You are getting this error:
`read': no implicit conversion of Kickbox::HttpClient::Response into String (TypeError)
Because, in this line:
file = File.read(response)
Your response is a Kickbox::HttpClient::Response object, but File.read expects a String (a file name, possibly with a path).
I'm not sure what you are trying to do, but file = File.read(response) is wrong here; you can't do that, and that's why you are getting the error above.
If you really want to use file, then you can write the response to a file and then read the response back from the file and use that:
f = File.new('response.txt', 'w+')        # create a file in read/write mode
f.write(response)                         # write the response into that file (you probably want response.body here)
f.close                                   # close (and flush) the file before reading it back
file_content = File.read('response.txt')  # read the response back from the file
So the issue is not really about accessing a 3rd party API JSON object in Ruby; you are just using File.read the wrong way.
You can get the response from the API by doing this:
client = Kickbox::Client.new('YOUR_API_KEY')
kickbox = client.kickbox()
response = kickbox.verify("test@easdfwf.com")
Then you can play with the response, e.g. do a puts response.inspect or puts response.body.inspect to see what's inside that object.
From there, you can extract only the outputs you need.
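For example, a minimal sketch of pulling out just the result field (an assumption: response.body holds the JSON payload; depending on the gem version it may already be a parsed Hash, otherwise it is the raw JSON String):
require 'json'
body = response.body
body = JSON.parse(body) if body.is_a?(String) # parse only if the body is a raw JSON string
puts body['result'] # => "undeliverable" for the sample response above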

Related

Ruby Post returns 404 URL Not found while curl works fine

I'm trying to write some Ruby code to update GitLab CI/CD variables using the REST endpoint update variable. When I perform a curl with the same path, the same private token, and the same --form data it updates the variable as expected. When I use the Ruby code that I put together based on reading stackoverflow and the net::http docs, it fails with a 404 URL not found.
I can use a similar piece of code to create a new CI/CD variable successfully. I can also delete an existing variable and re-create it, but I would like to know the mistake I am making in the update call.
Can someone point out what I did wrong?
#!/usr/bin/env ruby
require 'net/http'
require 'uri'
token = File.read(__dir__ + '/.gitlab-token').chomp
host = 'https://gitlab.com/'
variables_path = 'api/v4/projects/123456/variables'
env_var = 'MY_VAR'
update_uri = URI(host + variables_path + '/' + env_var)
# I've written the above this way because my actual code
# has a delete and create in order to "update" the variable
response = Net::HTTP.start(update_uri.host, update_uri.port, use_ssl: true) do |http|
  update_request = Net::HTTP::Post.new(update_uri)
  update_request['PRIVATE-TOKEN'] = token
  form_data = [
    ['value', 'a new value']
  ]
  update_request.set_form(form_data, 'multipart/form-data')
  response = http.request(update_request)
  response.body
end
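One thing worth checking here (an assumption on my part, not something stated in the question): the GitLab API documents updating an existing variable as a PUT to /projects/:id/variables/:key, while the code above sends a POST to that path, which GitLab tends to answer with 404. A minimal sketch of the PUT variant, reusing the same URI and token:
# Hedged sketch: same update_uri, token and form data as above, but sent as PUT,
# which is what the GitLab "update variable" endpoint expects.
update_request = Net::HTTP::Put.new(update_uri)
update_request['PRIVATE-TOKEN'] = token
update_request.set_form([['value', 'a new value']], 'multipart/form-data')
response = Net::HTTP.start(update_uri.host, update_uri.port, use_ssl: true) do |http|
  http.request(update_request)
end
puts response.code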

Proper way to upload a doc to FSCrawler for indexing in Elasticsearch

I'm prototyping a Rails application to upload documents to FSCrawler (running the REST interface), to incorporate into an Elasticsearch index. Using their example, this works:
response = `curl -F "file=@#{params[:document][:upload].tempfile.path}" "http://127.0.0.1:8080/fscrawler/_upload?debug=true"`
The file gets uploaded, and the content gets indexed. This is an example of what I get:
"{\n \"ok\" : true,\n \"filename\" : \"RackMultipart20200130-91061-16swulg.pdf\",\n \"url\" : \"http://127.0.0.1:9200/local/_doc/d661edecf3e28572676e97a6f0d1d\",\n \"doc\" : {\n \"content\" : \"\\n \\n \\n\\nBasically, what you need to know is that Dante is all IP-based, and makes use of common IT standards. Each Dante device behaves \\n\\nmuch like any other network device you would already find on your network. \\n\\nIn order to make integration into an existing network easy, here are some of the things that Dante does: \\n\\n▪ Dante...
When I run curl at the command line, I get EVERYTHING, like the "filename" being properly set. If I use it as above, in the Rails controller, as you can see, the filename is set to the Tempfile's filename. That's not a workable solution. Trying to use params[:document][:upload].tempfile (without .path) or just params[:document][:upload] both fail entirely.
I'm trying to do this "the right way," but every incarnation of using a proper HTTP client to do this fails. I can't figure out how to invoke an HTTP POST that will submit a file to FSCrawler the way curl (on the command line) does it.
In this example, I'm just trying to send the file by using the Tempfile file object. For some reason, FSCrawler gives me the error in the comment, and get a little metadata, but no content is indexed:
## Failed to extract [100000] characters of text for ...
## org.apache.tika.exception.ZeroByteFileException: InputStream must have > 0 bytes
uri = URI("http://127.0.0.1:8080/fscrawler/_upload?debug=true")
request = Net::HTTP::Post.new(uri)
form_data = [['file', params[:document][:upload].tempfile,
              { filename: params[:document][:upload].original_filename,
                content_type: params[:document][:upload].content_type }]]
request.set_form form_data, 'multipart/form-data'
response = Net::HTTP.start(uri.hostname, uri.port) do |http|
  http.request(request)
end
If I change the above to use params[:document][:upload].tempfile.path, then I don't get the error about the InputStream, but I also (still) do not get any content indexed. This is an example of what I get:
{"_index":"local","_type":"_doc","_id":"72c9ecf2a83440994eb87d28786e6","_version":3,"_seq_no":26,"_primary_term":1,"found":true,"_source":{"content":"/var/folders/bn/pcc1h8p16tl534pw__fdz2sw0000gn/T/RackMultipart20200130-91061-134tcxn.pdf\n","meta":{},"file":{"extension":"pdf","content_type":"text/plain; charset=ISO-8859-1","indexing_date":"2020-01-30T15:33:45.481+0000","filename":"Similarity in Postgres and Rails using Trigrams · pganalyze.pdf"},"path":{"virtual":"Similarity in Postgres and Rails using Trigrams · pganalyze.pdf","real":"Similarity in Postgres and Rails using Trigrams · pganalyze.pdf"}}}
If I try to use RestClient and try to send the file by referencing the actual path to the Tempfile, then I get this error message and nothing is indexed:
## Unsupported media type
response = RestClient.post 'http://127.0.0.1:8080/fscrawler/_upload?debug=true',
                           file: params[:document][:upload].tempfile.path,
                           content_type: params[:document][:upload].content_type
If I try to .read() the file, and submit that, then I break the FSCrawler form:
## Internal server error
request = RestClient::Request.new(
  :method => :post,
  :url => 'http://127.0.0.1:8080/fscrawler/_upload?debug=true',
  :payload => {
    :multipart => true,
    :file => File.read(params[:document][:upload].tempfile),
    :content_type => params[:document][:upload].content_type
  })
response = request.execute
Obviously, I've been trying this every way I can, but I can't replicate whatever curl is doing with any known Ruby-based HTTP clients. I'm utterly lost as to how to get Ruby to submit data to FSCrawler in a way that will get the document contents indexed properly. I've been at this far longer than I care to admit. What am I missing here?
I finally tried Faraday, and, based on this answer, came up with the following:
connection = Faraday.new('http://127.0.0.1:8080') do |f|
  f.request :multipart
  f.request :url_encoded
  f.adapter :net_http
end
file = Faraday::UploadIO.new(
  params[:document][:upload].tempfile.path,
  params[:document][:upload].content_type,
  params[:document][:upload].original_filename
)
payload = { :file => file }
response = connection.post('/fscrawler/_upload', payload)
Using Fiddler helped me to see the results of my attempts, as I got closer and closer to the curl request. This snippet posts the request almost exactly as curl does. To route this call through the proxy, I just needed to add , proxy: 'http://localhost:8866' to the end of the connection setup.
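As a side note (an assumption about newer library versions, not part of the original setup): on Faraday 2.x the multipart middleware lives in the separate faraday-multipart gem and Faraday::UploadIO has been replaced by Faraday::Multipart::FilePart, so a roughly equivalent sketch would be:
# Hedged sketch for Faraday 2.x with the faraday-multipart gem; `params` is
# assumed to come from the same Rails controller as in the code above.
require 'faraday'
require 'faraday/multipart'
connection = Faraday.new('http://127.0.0.1:8080') do |f|
  f.request :multipart
  f.request :url_encoded
end
file = Faraday::Multipart::FilePart.new(
  params[:document][:upload].tempfile.path,
  params[:document][:upload].content_type,
  params[:document][:upload].original_filename
)
response = connection.post('/fscrawler/_upload', file: file)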

How to get the downloaded xlsx file from the api endpoint in karate?

I have an endpoint that downloads an xlsx file. In my test, I need to check the content of the file (not comparing the file with another file, but reading the content and checking). I am using karate framework for testing and I am trying to use apache POI for working with the excel sheet. However, the response I get from karate when calling the download endpoint is a String. For creating an excel file with POI I need an InputStream or the path to the actual file. I have tried the conversion, but it does not work.
I guess I am missing some connection here, or maybe the conversion is bad, I am new to karate and to the whole thing.
I appreciate any help, thanks!
Given url baseUrl
Given path downloadURI
When method GET
Then status 200
And match header Content-disposition contains 'attachment'
And match header Content-disposition contains 'example.xlsx'
And match header Content-Type == 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
* def value = FileChecker.createExcelFile(response)
* print value
And the Java code:
public static String createExcelFile(String excel) throws IOException, InvalidFormatException {
    InputStream stream = IOUtils.toInputStream(excel, Charset.forName("UTF-8"));
    Workbook workbook = WorkbookFactory.create(stream);
    return ("Workbook has " + workbook.getNumberOfSheets() + " Sheets : ");
}
When running the scenario, I get the following error:
javascript evaluation failed: FileChecker.createExcelFile(response), java.io.IOException: Failed to read zip entry source
When testing the same endpoint in Postman, I am getting a valid excelsheet.
From Karate 0.9.X onwards you have a responseBytes variable which gives you the raw bytes; that may be what you need.
* def value = FileChecker.createExcelFile(responseBytes)
And you can change your method signature to be:
public static String createExcelFile(byte[] excel) {}
You should easily be able to convert the byte array to an InputStream (for example with a ByteArrayInputStream) and take it from there.
P.S. just saying that it "works in Postman" is not helpful :P
To download a zip file from Karate tests as a binary byte array:
Scenario: To verify and get the ADCI Uri from deployment
Given url basicURL + DeployUri + ArtifactUri
And headers {authorization:'#(authToken)',accept:'application/json',tenant:'#(tenantUUId)',Content-Type:'application/zip'}
When method get
Then status 200
And def responsebytes = responseBytes

Ruby httpclient: 'create_request': undefined method 'each'

I'm green when it comes to Ruby. Right now I'm mucking about with a script which connects to the Terremark eCloud API Explorer. I'm trying to use the httpclient gem, but I'm a bit confused as to how I'm supposed to construct my client.
#!/usr/bin/ruby
require "httpclient"
require 'base64'
require 'hmac-sha1'
require 'openssl'
# Method definitions
def get_date
  # Get the time and date in the necessary format
  result = Time.now.strftime('%a, %d %b %Y %H:%M:%S GMT')
end
def get_signature(action,date,headers,resource,user,pass)
  string_to_sign = "#{action}
#{date}
#{headers}
#{resource}\n"
  return Base64.encode64(OpenSSL::HMAC.digest('sha1', "#{user}:#{pass}", "#{string_to_sign}"))
end
# Initial variables
date = get_date
domain = "https://services.enterprisecloud.terremark.com"
password = 'password'
query = {}
tmrk_headers = Hash.new
tmrk_headers['x-tmrk-date: '] = date
tmrk_headers['x-tmrk-version: '] = '2013-06-01'
uri = '/cloudapi/spec/networks/environments/1'
url = "#{domain}#{uri}"
username = 'user@terremark.com'
verb = 'GET'
signature = get_signature(verb,date,tmrk_headers,uri,username,password)
tmrk_headers['Authorization: '] = "Basic \"#{signature}\""
puts signature
client = HTTPClient.new
client.get_content(url,query,tmrk_headers)
EDIT: This is no longer valid as I've moved beyond this error with some help:
Right now I'm not concerned about seeing what is returned from the connection. I'm just looking to create an error-free run. For instance, if I run the script without the client.get_content line it will return to a prompt without issue (giving me the impression that everything ran cleanly, if not uselessly).
How am I supposed to construct this? The httpclient documentation uses the example with external headers:
extheader = [['Accept', 'image/jpeg'], ['Accept', 'image/png']]
clnt.get_content(uri, query, extheader)
I'm making the assumption that the query is the URI that I've defined.
In reality, it isn't set up right in the first place. I need to include the auth_header string in the string to be signed, but the signature itself is part of that variable, so I've obviously created a circular dependency there.
Any assistance with this will be more than appreciated.
EDIT2: Removed strace pastebin. Adding Ruby backtrace:
/home/msnyder/.rvm/gems/ruby-2.1.1/gems/httpclient-2.3.4.1/lib/httpclient.rb:1023:in `create_request': undefined method `each' for #<String:0x0000000207d1e8> (NoMethodError)
from /home/msnyder/.rvm/gems/ruby-2.1.1/gems/httpclient-2.3.4.1/lib/httpclient.rb:884:in `do_request'
from /home/msnyder/.rvm/gems/ruby-2.1.1/gems/httpclient-2.3.4.1/lib/httpclient.rb:959:in `follow_redirect'
from /home/msnyder/.rvm/gems/ruby-2.1.1/gems/httpclient-2.3.4.1/lib/httpclient.rb:594:in `get_content'
from ./test.rb:42:in `<main>'
EDIT3: Updated script; adding further backtrace after making necessary script modifications:
/home/msnyder/.rvm/gems/ruby-2.1.1/gems/httpclient-2.3.4.1/lib/httpclient.rb:975:in `success_content': unexpected response: #<HTTP::Message::Headers:0x00000001dddc58 @http_version="1.1", @body_size=0, @chunked=false, @request_method="GET", @request_uri=#<URI::HTTPS:0x00000001ddecc0 URL:https://services.enterprisecloud.terremark.com/cloudapi/spec/networks/environments/1>, @request_query={}, @request_absolute_uri=nil, @status_code=400, @reason_phrase="Bad Request", @body_type=nil, @body_charset=nil, @body_date=nil, @body_encoding=#<Encoding:US-ASCII>, @is_request=false, @header_item=[["Content-Type", "text/html; charset=us-ascii"], ["Server", "Microsoft-HTTPAPI/2.0"], ["Date", "Thu, 27 Mar 2014 23:12:53 GMT"], ["Connection", "close"], ["Content-Length", "339"]], @dumped=false> (HTTPClient::BadResponseError)
from /home/msnyder/.rvm/gems/ruby-2.1.1/gems/httpclient-2.3.4.1/lib/httpclient.rb:594:in `get_content'
from ./test.rb:52:in `<main>'
The issue that you're having as stated by your backtrace
/home/msnyder/.rvm/gems/ruby-2.1.1/gems/httpclient-2.3.4.1/lib/httpclient.rb:1023:in `create_request': undefined method `each' for #<String:0x0000000207d1e8> (NoMethodError)
from /home/msnyder/.rvm/gems/ruby-2.1.1/gems/httpclient-2.3.4.1/lib/httpclient.rb:884:in `do_request'
from /home/msnyder/.rvm/gems/ruby-2.1.1/gems/httpclient-2.3.4.1/lib/httpclient.rb:959:in `follow_redirect'
from /home/msnyder/.rvm/gems/ruby-2.1.1/gems/httpclient-2.3.4.1/lib/httpclient.rb:594:in `get_content'
from ./test.rb:42:in `<main>'
is that it seems like you're passing a String object to one of the arguments in get_content where it expects an object that responds to the method each.
From looking at the documentation of httpclient#get_content (http://www.ruby-doc.org/gems/docs/h/httpclient-xaop-2.1.6/HTTPClient.html#method-i-get_content),
it expects the second parameter to be a Hash or Array of query arguments.
From your code sample, showing only the relevant parts:
uri = '/cloudapi/spec/networks/environments/1'
url = "https://services.enterprisecloud.terremark.com"
tmrk_headers = "x-tmrk-date:\"#{date}\"\nx-tmrk-version:2014-01-01"
auth_header = "Authorization: CloudApi AccessKey=\"#{access_key}\" SignatureType=\"HmacSHA1\" Signature=\"#{signature}\""
full_header = "#{tmrk_headers}\n#{auth_header}"
client = HTTPClient.new
client.get_content(url,uri,full_header)
There are two things that I see wrong with your code.
You're passing in a String value for the query. Specifically, you're passing in uri which has a value of what I'm assuming is the path that you want to hit.
For the extra headers parameter, you're passing in a String value, the one held in full_header.
What you need to do in order to fix this is pass in the full URL as the first parameter.
This means it should look something like this:
url = "https://services.enterprisecloud.terremark.com/cloudapi/spec/networks/environments/1"
query = {} # if you have any parameters to pass in they should be here.
headers = {
  "x-tmrk-date" => date,
  "x-tmrk-version" => "2014-01-01",
  "Authorization" => "CloudApi AccessKey=#{access_key} SignatureType=HmacSHA1 Signature=#{signature}"
}
client = HTTPClient.new
client.get_content(url, query, headers)

I am trying to use Curl::Easy.http_put but have some issues with the data argument

I'm struggling with a Ruby script to upload some pictures to Moodstocks using their HTTP interface.
Here is the code that I have so far:
curb = Curl::Easy.new
curb.http_auth_types = :digest
curb.username = MS_API
curb.password = MS_SECRET
curb.multipart_form_post = true
Dir.foreach(images_directory) do |image|
  if image.include? '.jpg'
    path = images_directory + image
    filename = File.basename(path, File.extname(path))
    puts "Upload #{path} with id #{filename}"
    raw_url = 'http://api.moodstocks.com/v2/ref/' + filename
    encoded_url = URI.parse URI.encode raw_url
    curb.url = encoded_url
    curb.http_put(Curl::PostField.file('image_file', path))
  end
end
and this is the error that I get
/Library/Ruby/Gems/2.0.0/gems/curb-0.8.5/lib/curl/easy.rb:57:in `add': no implicit conversion of nil into String (TypeError)
from /Library/Ruby/Gems/2.0.0/gems/curb-0.8.5/lib/curl/easy.rb:57:in `perform'
from upload_moodstocks.rb:37:in `http_put'
from upload_moodstocks.rb:37:in `block in <main>'
from upload_moodstocks.rb:22:in `foreach'
from upload_moodstocks.rb:22:in `<main>'
I think the problem is in how I give the argument to the http_put method, but I have tried to look for some examples of Curl::Easy.http_put and have found nothing so far.
Could anyone point me to some documentation regarding it or help me out on this?
Thank you in advance.
There are several problems here:
1. URI::HTTP instead of String
First, the TypeError you encounter comes from the fact that you pass a URI::HTTP instance (encoded_url) as curb.url instead of a plain Ruby string.
You may want to use encoded_url.to_s, but the question is why you do this parse/encode here at all.
2. PUT w/ multipart/form-data
The second problem is related to curb. At the time of writing (v0.8.5), curb does NOT support performing an HTTP PUT request with multipart/form-data encoding.
If you refer to the source code you can see that:
the multipart_form_post setting is only used for POST requests,
the put_data setter does not support Curl::PostField-s
To solve your problem you need an HTTP client library that can combine Digest Authentication, multipart/form-data and HTTP PUT.
In Ruby you can use rufus-verbs, but you will need to use rest-client to build the multipart body.
There is also HTTParty but it has issues with Digest Auth.
That is why I recommend going ahead with Python and using Requests instead:
import requests
from requests.auth import HTTPDigestAuth
import os

MS_API_KEY = "kEy"
MS_API_SECRET = "s3cr3t"
filename = "sample.jpg"

with open(filename, "rb") as f:  # open in binary mode since it is an image
    base = os.path.basename(filename)
    uid = os.path.splitext(base)[0]
    r = requests.put(
        "http://api.moodstocks.com/v2/ref/%s" % uid,
        auth = HTTPDigestAuth(MS_API_KEY, MS_API_SECRET),
        files = {"image_file": (base, f.read())}
    )
    print(r.status_code)
