How to get response from API after POST - ruby

I am interacting with an API that sometimes returns a reference number for certain functions. How would I go about displaying this reference number? At present the API returns, for example, this string (some code taken out for security reasons):
if (AppSettings.GetLandingPageReceipts())
{
    long returnValue = loanLP.AddWithdrawalToBankID(CustomerID, SavingsAccountID, BankAccID, Amount);
    return string.Concat("Requested Withdrawal Ref No: ", returnValue);
}

ReceiptWithdrawal withdraw = new ReceiptWithdrawal(CustomerID, SavingsAccountID, Amount, session, pp, BankAccID);
ReceiptTRN TRN = new ReceiptTRN();
TRN.Post(withdraw);
return JsonConvert.SerializeObject(withdraw.ReceiptNo);
In my Ruby code I use this method for POST calls:
def post_call(routePath, params)
  started_at = Time.now
  logger.bench 'SERVICE - POST', started_at, routePath

  uri = URI.parse("#{settings.abacus_service_endpoint}#{routePath}")
  http = Net::HTTP.new(uri.host, uri.port)
  http.set_debug_output $stderr

  req = Net::HTTP::Post.new(uri.request_uri, get_header_for_abacus_service)
  req.form_data = params

  resp = http.request(req)
  if response_ok?(resp)
    # debug output
    puts '======= post request form_data ======='
    puts params
    puts '==================='
    return resp
  end
end
I can see the following in my console when working locally, so I know the value is being passed back. It's just accessing it that is the problem.
opening connection to 10.10.10.27...
opened
<- "POST /Accounts/WithdrawalToBank HTTP/1.1\r\nAuthorization: Token 433\r\nX-Customer-Id: 433\r\nX-Customer-Pin: 8EFD155E7C829421E16F14D367568C4179C4548320CD7E1B6AD6E9A485F2AF092FA42D0F645085F3DBBA5AEC2434720FC76E407E41443C3F5EDAFB958793254A\r\nX-Ip-Address: 10.0.2.2\r\nAccept: */*\r\nUser-Agent: Ruby\r\nContent-Type: application/x-www-form-urlencoded\r\nConnection: close\r\nHost: 10.10.10.27:3579\r\nContent-Length: 81\r\n\r\n"
<- "CustomerID=433&SavingsAccountID=10922&BankAccountID=2450&Amount=10.00&Reference=a"
-> "HTTP/1.1 200 OK\r\n"
-> "Transfer-Encoding: chunked\r\n"
-> "Content-Type: text/html\r\n"
-> "Server: Microsoft-HTTPAPI/2.0\r\n"
-> "Date: Mon, 23 Nov 2015 11:03:02 GMT\r\n"
-> "Connection: close\r\n"
-> "\r\n"
-> "22\r\n"
reading 34 bytes...
-> "Requested Withdrawal Ref No: 10059"
read 34 bytes
reading 2 bytes...
-> "\r\n"
read 2 bytes
-> "0\r\n"
-> "\r\n"
Conn close
If any further information is needed please let me know.

If the API is returning a body, you should be able to get it from your response with resp.body.
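For example, a minimal sketch building on the post_call method from the question (the route path and params here are placeholders):

resp = post_call('/Accounts/WithdrawalToBank', params)

# resp is a Net::HTTPResponse; #body returns the payload as a string
# (Net::HTTP decodes the chunked transfer encoding for you). For the
# response in the log above, this prints "Requested Withdrawal Ref No: 10059".
puts resp.body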

How can I properly read the sequence of bytes from a hyper::client::Request and print it to the console as a UTF-8 string?

I am exploring Rust and trying to make a simple HTTP request (using the hyper crate) and print the response body to the console. The response implements std::io::Read. Reading various documentation sources and basic tutorials, I have arrived at the following code, which I compile & execute using RUST_BACKTRACE=1 cargo run:
use hyper::client::Client;
use std::io::Read;

pub fn print_html(url: &str) {
    let client = Client::new();
    let req = client.get(url).send();
    match req {
        Ok(mut res) => {
            println!("{}", res.status);
            let mut body = String::new();
            match res.read_to_string(&mut body) {
                Ok(body) => println!("{:?}", body),
                Err(why) => panic!("String conversion failure: {:?}", why)
            }
        },
        Err(why) => panic!("{:?}", why)
    }
}
Expected:
A nice, human-readable HTML content of the body, as delivered by the HTTP server, is printed to the console.
Actual:
200 OK
thread '<main>' panicked at 'String conversion failure: Error { repr: Custom(Custom { kind: InvalidData, error: StringError("stream did not contain valid UTF-8") }) }', src/printer.rs:16
stack backtrace:
1: 0x109e1faeb - std::sys::backtrace::tracing::imp::write::h3800f45f421043b8
2: 0x109e21565 - std::panicking::default_hook::_$u7b$$u7b$closure$u7d$$u7d$::h0ef6c8db532f55dc
3: 0x109e2119e - std::panicking::default_hook::hf3839060ccbb8764
4: 0x109e177f7 - std::panicking::rust_panic_with_hook::h5dd7da6bb3d06020
5: 0x109e21b26 - std::panicking::begin_panic::h9bf160aee246b9f6
6: 0x109e18248 - std::panicking::begin_panic_fmt::haf08a9a70a097ee1
7: 0x109d54378 - libplayground::printer::print_html::hff00c339aa28fde4
8: 0x109d53d76 - playground::main::h0b7387c23270ba52
9: 0x109e20d8d - std::panicking::try::call::hbbf4746cba890ca7
10: 0x109e23fcb - __rust_try
11: 0x109e23f65 - __rust_maybe_catch_panic
12: 0x109e20bb1 - std::rt::lang_start::hbcefdc316c2fbd45
13: 0x109d53da9 - main
error: Process didn't exit successfully: `target/debug/playground` (exit code: 101)
Thoughts
Since I received 200 OK from the server, I believe I have received a valid response from the server (I can also empirically prove this by doing the same request in a more familiar programming language). Therefore, the error must be caused by me incorrectly converting the byte sequence into a UTF-8 string.
Alternatives
I also attempted the following solution, which gets me to a point where I can print the bytes to the console as a series of hex strings, but I know that this is fundamentally wrong because a UTF-8 character can be 1-4 bytes long. Treating individual bytes as characters will therefore only work for the single-byte subset of UTF-8 (the 128 ASCII characters).
use hyper::client::Client;
use std::io::Read;

pub fn print_html(url: &str) {
    let client = Client::new();
    let req = client.get(url).send();
    match req {
        Ok(res) => {
            println!("{}", res.status);
            for byte in res.bytes() {
                print!("{:x}", byte.unwrap());
            }
        },
        Err(why) => panic!("{:?}", why)
    }
}
We can confirm with the iconv command that the data returned from http://www.google.com is not valid UTF-8:
$ wget http://google.com -O page.html
$ iconv -f utf-8 page.html > /dev/null
iconv: illegal input sequence at position 5591
For some other urls (like http://www.reddit.com) the code works fine.
If we assume that most of the data is valid UTF-8, we can use String::from_utf8_lossy to work around the problem:
use hyper::client::Client;
use std::io::Read;

pub fn print_html(url: &str) {
    let client = Client::new();
    let req = client.get(url).send();
    match req {
        Ok(mut res) => {
            println!("{}", res.status);
            let mut body = Vec::new();
            match res.read_to_end(&mut body) {
                Ok(_) => println!("{:?}", String::from_utf8_lossy(&*body)),
                Err(why) => panic!("String conversion failure: {:?}", why),
            }
        }
        Err(why) => panic!("{:?}", why),
    }
}
Note that Read::read_to_string and Read::read_to_end return Ok with the number of bytes read on success, not the read data.
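For example (a small illustrative snippet, not part of the original code):

let mut body = String::new();
// `n` is the number of bytes read; the text itself ends up in `body`.
let n = res.read_to_string(&mut body).unwrap();
println!("read {} bytes", n);
println!("{}", body);

This is also why the Ok(body) arm in the question's first snippet would print the byte count rather than the HTML: the pattern binding shadows the buffer.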
If you actually look at the headers that Google returns:
HTTP/1.1 200 OK
Date: Fri, 22 Jul 2016 20:45:54 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
P3P: CP="This is not a P3P policy! See https://www.google.com/support/accounts/answer/151657?hl=en for more info."
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Set-Cookie: NID=82=YwAD4Rj09u6gUA8OtQH73BUz6UlNdeRc9Z_iGjyaDqFdRGMdslypu1zsSDWQ4xRJFyEn9-UtR7U6G7HKehoyxvy9HItnDlg8iLsxzlhNcg01luW3_-HWs3l9S3dmHIVh; expires=Sat, 21-Jan-2017 20:45:54 GMT; path=/; domain=.google.ca; HttpOnly
Alternate-Protocol: 443:quic
Alt-Svc: quic=":443"; ma=2592000; v="36,35,34,33,32,31,30,29,28,27,26,25"
Accept-Ranges: none
Vary: Accept-Encoding
Transfer-Encoding: chunked
You can see
Content-Type: text/html; charset=ISO-8859-1
Additionally
Therefore, the error must be caused by me incorrectly converting the byte sequence into an UTF-8 string.
There is no conversion to UTF-8 happening. read_to_string simply ensures that the data is UTF-8.
Simply put, assuming that an arbitrary HTML page is encoded in UTF-8 is completely incorrect. At best, you have to parse the headers to find the encoding and then convert the data. This is complicated because there's no real definition for what encoding the headers are in.
Once you have found the correct encoding, you can use a crate such as encoding to properly transform the result into UTF-8, if the result is even text! Remember that HTTP can return binary files such as images.
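For instance, a minimal sketch with the encoding crate, assuming we have already determined from the Content-Type header that the body is ISO-8859-1:

extern crate encoding;

use encoding::{Encoding, DecoderTrap};
use encoding::all::ISO_8859_1;

// Decode raw response bytes, known from the headers to be ISO-8859-1,
// into a Rust String (which is always valid UTF-8).
fn decode_latin1(body: &[u8]) -> String {
    ISO_8859_1
        .decode(body, DecoderTrap::Replace) // replace undecodable bytes
        .expect("decoding with DecoderTrap::Replace should not fail")
}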

Method missing when running FSharp.Data app

I have a mono/.NET 4.5 app that compiles just fine. But when I run it I get a method missing error on Http.Request. The code in question is this:
let private post url parser body =
    let res =
        Http.Request (
            url,
            body = (body |> TextRequest),
            silentHttpErrors = true,
            headers = [
                Accept HttpContentTypes.Json
                ContentType HttpContentTypes.Json
            ]
        )
    let body =
        match res.Body with
        | HttpResponseBody.Text str -> str
        | _ -> failwith "Only text replies are supported"
    if res.StatusCode >= 200 && res.StatusCode < 300 then
        body |> parser
    else
        body |> errorParser
It doesn't seem to be related to that particular method, because all method calls into FSharp.Data seem to fail. I'm experiencing this both when running standard NUnit tests and when executing the app.
It would seem that the issue was that I had FSharp.Data.TypeProviders installed in the GAC. Removing it with
gacutil -u FSharp.Data.TypeProviders
solved the problem.

F#: Breaking out of a loop

I am new to programming and F# is my first language.
I have a list of URLs that, when first accessed, either returned HTTP error 404 or experienced a gateway timeout. For these URLs, I would like to try accessing them another 3 times. At the end of these 3 attempts, if a WebException is still thrown, I will assume that the URL doesn't exist, and I will add it to a text file containing all of the invalid URLs.
Here is my code:
let tryAccessingAgain (url: string) (numAttempts: int) =
    async {
        for attempt = 1 to numAttempts do
            try
                let! html = fetchHtmlAsync url
                let name = getNameFromPage html
                let id = getIdFromUrl url
                let newTextFile = File.Create(htmlDirectory + "\\" + id.ToString("00000") + " " + name.TrimEnd([|' '|]) + ".html")
                use file = new StreamWriter(newTextFile)
                file.Write(html)
                file.Close()
            with
            | :? System.Net.WebException -> File.AppendAllText(@"G:\User\Invalid URLs.txt", url + "\n")
    }
I have tested fetchHtmlAsync, getNameFromPage and getIdFromUrl in F# Interactive. All of them work fine.
If I succeed in downloading the HTML contents of a URL without using all 3 attempts, obviously I want to break out of the for-loop immediately. My question is: How may I do so?
Use recursion instead of the loop:
let rec tryAccessingAgain (url: string) (numAttempts: int) =
    async {
        if numAttempts > 0 then
            try
                let! html = fetchHtmlAsync url
                let name = getNameFromPage html
                let id = getIdFromUrl url
                let newTextFile = File.Create(htmlDirectory + "\\" + id.ToString("00000") + " " + name.TrimEnd([|' '|]) + ".html")
                use file = new StreamWriter(newTextFile)
                file.Write(html)
                file.Close()
            with
            | :? System.Net.WebException ->
                File.AppendAllText(@"G:\User\Invalid URLs.txt", url + "\n")
                return! tryAccessingAgain url (numAttempts-1)
    }
Please note that I could not test it and there might be some syntax errors - sorry if so.
While we are at it, you might want to rewrite the logging of the invalid URL like this:
let rec tryAccessingAgain (url: string) (numAttempts: int) =
    async {
        if numAttempts <= 0 then
            File.AppendAllText(@"G:\User\Invalid URLs.txt", url + "\n")
        else
            try
                let! html = fetchHtmlAsync url
                let name = getNameFromPage html
                let id = getIdFromUrl url
                let newTextFile = File.Create(htmlDirectory + "\\" + id.ToString("00000") + " " + name.TrimEnd([|' '|]) + ".html")
                use file = new StreamWriter(newTextFile)
                file.Write(html)
                file.Close()
            with
            | :? System.Net.WebException ->
                return! tryAccessingAgain url (numAttempts-1)
    }
This way the URL will only be logged once all the attempts have been made.

Why does the Gmail API send HTML emails as plain text?

I'm trying to send an HTML email using the Gmail API, but for some reason it randomly sends the email as text/plain. It seems that Google alters the Content-Type header I set. Is there any reason for that? The email content is exactly the same every time I test it. Is the API still experimental?
Sometimes when it works it also adds Content-Type: multipart/alternative; (although I never set it).
The encoding process looks as below. The code is Go, but I guess it is self-explanatory and the process is language-agnostic.
header := make(map[string]string)
header["From"] = em.From.String()
header["To"] = em.To.String()
// header["Subject"] = encodeRFC2047(em.Subject)
header["Subject"] = em.Subject
header["MIME-Version"] = "1.0"
header["Content-Type"] = "text/html; charset=\"utf-8\""
// header["Content-Transfer-Encoding"] = "base64"
header["Content-Transfer-Encoding"] = "quoted-printable"

var msg string
for k, v := range header {
    msg += fmt.Sprintf("%s: %s\r\n", k, v)
}
msg += "\r\n" + em.Message

gmsg := gmail.Message{
    Raw: encodeWeb64String([]byte(msg)),
}

_, err = gmailService.Users.Messages.Send("me", &gmsg).Do()
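The encodeWeb64String helper is not shown in the question. The Gmail API expects the Raw field to be base64url-encoded ("web-safe" base64), so a minimal sketch of such a helper, using only the standard library, might be:

import "encoding/base64"

// encodeWeb64String base64url-encodes the raw RFC 2822 message,
// the format the Gmail API expects in the Message.Raw field.
func encodeWeb64String(b []byte) string {
    return base64.URLEncoding.EncodeToString(b)
}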
Hmm, are you sure it's not a bug in your program? Can you print out the entire string and paste it here?
I just used the Gmail API to send an email like:
To: <redacted>
Subject: test html email 2015-01-14 09:45:40
Content-type: text/html
<html><body><b>hello</b>world</body></html>
and it looked as expected at the recipient's end in Gmail. Well, actually it looks like it got wrapped in a multipart/alternative and a text/plain part was added as well (a good thing IMO):
<random trace headers>
MIME-Version: 1.0
From: <redacted>
Date: Wed, 14 Jan 2015 09:46:41 -0800
Message-ID:
Subject: test html email 2015-01-14 09:45:40
To: <redacted>
Content-Type: multipart/alternative; boundary=089e0141a9a2875c38050ca05201
--089e0141a9a2875c38050ca05201
Content-Type: text/plain; charset=UTF-8
*hello*world
--089e0141a9a2875c38050ca05201
Content-Type: text/html; charset=UTF-8
<html><body><b>hello</b>world</body></html>
--089e0141a9a2875c38050ca05201--
In any case, it's doing some parsing/sanitizing but does allow sending text/html email.

Two sequential HTTP requests

Sorry for the newbie question (and for my English) :)
I am trying to write the following function:
the function downloads the content from URL1 (received as an argument)
the function parses this content and extracts URL2
the function downloads the content from URL2
the content from URL2 is the result of this function
if an error occurs, this function should return Nothing
I know how to execute the HTTP requests. I have a function to parse the content from URL1. But I don't know how:
to execute a new request with the extracted URL2
to skip the second request if URL2 isn't extracted (or an error occurred with URL1)
In principle you want something like this:
import Maybe
import Http

type Url = String

getContentFromUrl : Maybe Url -> Maybe String
getContentFromUrl url = --your implementation

extractUrlFromContent : Maybe String -> Maybe Url
extractUrlFromContent content = --your implementation

content = getContentFromUrl (Just "http://example.com")
    |> extractUrlFromContent
    |> getContentFromUrl
Sending an Http request means talking to the outside world, which involves Signals in Elm. So the final result from URL2 will come packed in a Signal. As long as you're OK with that, you can use Maybe to return the content in a Maybe in a Signal. For example:
import Maybe
import Http

-- helper functions
isSuccess : Http.Response a -> Bool
isSuccess response = case response of
    Http.Success _ -> True
    _ -> False

responseToMaybe : Http.Response a -> Maybe.Maybe a
responseToMaybe response = case response of
    Http.Success a -> Just a
    _ -> Nothing

parseContentAndExtractUrl : String -> String
parseContentAndExtractUrl = identity -- this still requires your implementation

-- URL1
startUrl : String
startUrl = "www.example.com" -- replace this with your URL1

result1 : Signal (Http.Response String)
result1 = Http.sendGet <| constant startUrl

-- URL2
secondUrl : Signal String
secondUrl = result1
    |> keepIf isSuccess (Http.Success "")
    |> lift (\(Http.Success s) -> s)
    |> lift parseContentAndExtractUrl

result2 : Signal (Maybe String)
result2 = secondUrl
    |> Http.sendGet
    |> lift responseToMaybe
Note that there are plans to make all of this easier to work with: https://groups.google.com/d/topic/elm-discuss/BI0D2b-9Fig/discussion
