How do I convert a curl command with output option to httparty? - ruby

I am trying to convert this:
curl -k -v -X GET -H "Accept: application/pdf" https://username:password@rest.click2mail.com/v1/mailingBuilders/456/proofs/1 -o myProof
for httparty. Here's my code:
@auth = {:username => 'test', :password => 'test'}
options = {:headers => {'Accept' => 'application/pdf'}, :basic_auth => @auth }
body = HTTMultiParty.get("https://stage.rest.click2mail.com/v1/mailingBuilders/54544/proofs/1", options)
File.open("myProof", "w") do |file|
  file.write body
end
p "Response #{body.parsed_response}"
the response returns
"Cannot convert urn:c2m:document:id:361 from text/plain to application/pdf"
Edit 2
body.inspect with "text/plain" returns
#<HTTParty::Response:0x8 @parsed_response=nil, @response=#<Net::HTTPNotAcceptable 406 Not Acceptable readbody=true>, @headers={"date"=>["Sun, 06 May 2012 11:22:12 GMT"], "server"=>["Jetty(6.1.x)"], "content-length"=>["0"], "connection"=>["close"], "content-type"=>["text/plain; charset=UTF-8"]}>
with "application/pdf"
#<HTTParty::Response:0x7fce08a92260 @parsed_response="Cannot convert urn:c2m:document:id:361 from text/plain to application/pdf", @response=#<Net::HTTPBadRequest 400 Bad Request readbody=true>, @headers={"date"=>["Sun, 06 May 2012 11:24:09 GMT"], "server"=>["Jetty(6.1.x)"], "content-type"=>["application/pdf"], "connection"=>["close"], "transfer-encoding"=>["chunked"]}>
Edit 3
API: Step 8
https://developers.click2mail.com/rest-api#send-a-test-mailing
Edit 4
with the debug_output option
with "application/pdf"
opening connection to stage.rest.click2mail.com...
opened
<- "GET /v1/mailingBuilders/54544/proofs/1 HTTP/1.1\r\nAccept: application/pdf\r\nAuthorization: Basic Ym9sb2RldjptVW43Mjk0eQ==\r\nConnection: close\r\nHost: stage.rest.click2mail.com\r\n\r\n"
-> "HTTP/1.1 400 Bad Request\r\n"
-> "Date: Sun, 06 May 2012 14:05:30 GMT\r\n"
-> "Server: Jetty(6.1.x)\r\n"
-> "Content-Type: application/pdf\r\n"
-> "Connection: close\r\n"
-> "Transfer-Encoding: chunked\r\n"
-> "\r\n"
-> "49\r\n"
reading 73 bytes...
-> ""
-> "Cannot convert urn:c2m:document:id:361 from text/plain to application/pdf"
read 73 bytes
reading 2 bytes...
-> ""
-> "\r\n"
read 2 bytes
-> "0\r\n"
-> "\r\n"
Conn close
with "text/plain"
opening connection to stage.rest.click2mail.com...
opened
<- "GET /v1/mailingBuilders/54544/proofs/1 HTTP/1.1\r\nAccept: text/plain\r\nAuthorization: Basic Ym9sb2RldjptVW43Mjk0eQ==\r\nConnection: close\r\nHost: stage.rest.click2mail.com\r\n\r\n"
-> "HTTP/1.1 406 Not Acceptable\r\n"
-> "Date: Sun, 06 May 2012 14:14:19 GMT\r\n"
-> "Server: Jetty(6.1.x)\r\n"
-> "Content-Length: 0\r\n"
-> "Connection: close\r\n"
-> "Content-Type: text/plain; charset=UTF-8\r\n"
-> "\r\n"
reading 0 bytes...
-> ""
read 0 bytes
Conn close
log from curl command
Edit 5
Well, I found a solution with Rest Client and made my modest contribution with this gem:
https://github.com/bolom/click2mail-ruby-gem
Thanks, everybody.

You can also use Net::HTTP (require 'net/http').
See this question for an example of how to download large files.

Try this:
body = HTTParty.get("https://username:password@rest.click2mail.com/v1/mailingBuilders/456/proofs/1")
File.open("myProof", "wb") do |file|
  file.write body
end

The problem is with the API itself.
It has nothing to do with how you are calling the API to get the proof or which REST library you are using. The problem is that whatever you used to create this mailingBuilder is causing an issue, which results in the error message "Cannot convert urn:c2m:document:id:361 from text/plain to application/pdf".
Please send support@click2mail.com exactly what you have done to create this mailingBuilder so we can review it and see what the problem is.

Related

karate framework graphql error on standalone [duplicate]

This question already has an answer here:
Unable to use read('classpath:') when running tests with standalone karate.jar
(1 answer)
Closed 1 year ago.
I have a simple GraphQL query that works well in the Maven build, but I get an error when it is executed as a feature file with the standalone Karate JAR.
Here is the GraphQL used in the request:
getCustomerById.graphqls
-----------------------
query {
  getCustomerById(custid: "12345") {
    custid
    firstname
    lastname
    address1_text
    address2_text
    city_text
    state_text
    zip_text
  }
}
-------------------------
#Feature file
graphql.feature
* configure ssl = { keyStore: 'classpath:customer/test.pfx', keyStorePassword: 'test123', keyStoreType: 'pkcs12' }
Given url 'https://<IT_URL>/graphql-data/v1/graphql'
* def customerRequest = read('getCustomerById.graphqls')
And def variables = { customerid: '123456'}
And request { query: '#(query)', variables: '#(variables)' }
When method post
Then status 200
* print 'Response ==>', response
I'm getting the following error:
======
18:31:42.699 [main] WARN com.intuit.karate.JsonUtils - object to json serialization failure, trying alternate approach: [B cannot be cast to [Ljava.lang.Object;
18:31:42.701 [main] DEBUG com.intuit.karate - request:
2 > POST https://<ITURL>/graphql-data/v1/graphql
2 > Content-Type: application/json; charset=UTF-8
2 > Content-Length: 62
2 > Host: it-xxx-dns.com
2 > Connection: Keep-Alive
2 > User-Agent: Apache-HttpClient/4.5.13 (Java/1.8.0_291)
2 > Accept-Encoding: gzip,deflate
{"variables":{"customerId":"792798178595168"},"query":"[B#7ca8d498"}
18:31:43.060 [main] DEBUG com.intuit.karate - response time in milliseconds: 357
2 < 200
2 < Date: Wed, 30 Jun 2021 23:31:42 GMT
2 < Content-Type: application/json;charset=UTF-8
2 < Content-Length: 109
2 < Connection: keep-alive
2 < Access-Control-Allow-Origin: *
2 < Access-Control-Allow-Methods: *
2 < Access-Control-Max-Age: 3600
2 < Access-Control-Allow-Headers: authorization, content-type, xsrf-token
2 < Access-Control-Expose-Headers: xsrf-token
2 < Vary: Origin
2 < Vary: Access-Control-Request-Method
2 < Vary: Access-Control-Request-Headers
2 < Strict-Transport-Security: max-age=15724800; includeSubDomains
2 < Set-Cookie: INGRESSCOOKIE=1625095903.954.332.448531; Domain=it-i3-xxx-dns.com; Secure
{"errors":[{"description":"Invalid Syntax : offending token '[' at line 1 column 1","error_code":"400-900"}]}
18:31:43.061 [main] INFO com.intuit.karate - [print] Response ==> {
"errors": [
{
"description": "Invalid Syntax : offending token '[' at line 1 column 1",
"error_code": "400-900"
}
]
}
======
Can you please let me know what's wrong with the code? Is it because of the SSL configuration and passing the .pfx file that it behaves differently in the standalone JAR? I'm not able to figure it out, but it works perfectly fine in the Maven build.
Yes, in stand-alone mode it is preferred that you use file: instead of classpath: unless you know how to properly set the class-path.
Please read this for more info and try to figure it out: https://stackoverflow.com/a/58398958/143475

processingFailure error (400) while retrieving CommentThreads list

I am trying to retrieve all the comments of a video via Python iteration/paging. I am authenticated correctly with a developer key:
import googleapiclient.discovery as gg
import googleapiclient.errors as gge

yt = gg.build(api_service_name='youtube', api_version='v3', developerKey=M_KEY)
comments = []
page = ''
while True:
    request = yt.commentThreads().list(
        part="snippet,replies",
        order="relevance",
        maxResults=100,
        pageToken=page,
        textFormat="plainText",
        videoId=video['id']
        # video is a static dictionary I've saved outside the script
    )
    try:
        response = request.execute()
        page = response['nextPageToken']
        comments.extend(response['items'])
        print('Comments extended')
    except KeyError:
        # there are no more pages
        print('Iteration ended')
        break
    except gge.HttpError as error:
        print('HTTP error:', error.__dict__['resp']['status'])
What I'm expecting it to do is iterate the pages of comments until response['nextPageToken'] throws a KeyError, meaning that there are no more pages of comments. Instead, the execution goes flawlessly for a dozen iterations (at best), then it starts to throw said processingFailure error, whose content looks like this:
{
  "error": {
    "errors": [
      {
        "domain": "youtube.commentThread",
        "reason": "processingFailure",
        "message": "The API server failed to successfully process the request. While this can be a transient error, it usually indicates that the requests input is invalid. Check the structure of the <code>commentThread</code> resource in the request body to ensure that it is valid.",
        "locationType": "other",
        "location": "body"
      }
    ],
    "code": 400,
    "message": "The API server failed to successfully process the request. While this can be a transient error, it usually indicates that the requests input is invalid. Check the structure of the <code>commentThread</code> resource in the request body to ensure that it is valid."
  }
}
I have tried to log both the page and the videoId to ensure nothing went wrong with them, but they're both valid. I've also tried to time.sleep() for up to 15 minutes when that error occurs, but nothing changes.
This is the request in JSON format at the time of the error, captured using request.to_json(), thanks to @stvar for suggesting it:
{
"uri": "https://www.googleapis.com/youtube/v3/commentThreads",
"method": "POST",
"body": "part=snippet%2Creplies&order=relevance&maxResults=100&pageToken=QURTSl9pM2xlemsyWjAzYlBhWkNRMTFMMWIyMjFsVVNnS2U0WE8zTkwxRzdRdkpsdlVqWHlwWEV2SmxkeUxjdkR1UWk3eVU1OTI1cmJEeUtJZHRGQWVmY21PUGxVOXBER3YtckE2NlhSWlRwQzR0Y2VyY0JDbC1uNVRaSU56RklzejJERmRCc2lLUjV2Rm1LV0Njc3ByMjliRXRMZmNjRFJucFgwNFNBYVhkSFJzYXkzNVpKTXNNSzNfWmVGd3dSRWxYQmwyQmxnWGJwNFZidVpiYjlOWjBabFFsalZFZkdqZFV0SHlrVEJqclppVnRtMjZCTnYtQm9WWjFrQ0dELUlLTnYwWG50cU5BQXJ3Ukh3VE9PZnNaZ0tZaWN1ZTdBakJkWFp5Ymo5M2R5Y0g1aWVsWUUzUVg0TU83Q2JZQ1IxWnRTMXUyTFhpSDdmMU9GTmtiQUE0UjdyVUVBelNnSjhTTDVsLU1TaERwVHdvSVhkX1ktNVBTc2xkX09zcjBOT3E3Z2lVWWRPRFhkVF9NN1JaQTEyUEJmU1hNbUtvM2JzU1NzOFRid29wTEo3Q0hucmJnNHcwNUJzaGtqSE8wa2g2U0FUY3pQbDJ1bGNnaFRKNEJCRm90TVNyWXNSREgyQVFqMU9PNnY3elBGSEhrYXJSMUJYS09yQ0tOVE5Oa3l5V00tdGY3TTlwY3o0VXJsaWRua1BrNXVhWmVLMzV1T2NmOEhqaUNucEdheTRfZjNiM1JkYUJuaGZqQjFMV1c0NFRJNXlzR2trdFpLemV1SU12V0tCTW11b1RMU05PXzA0eVdHM3lRclpZaC1BN3k4RDdhTW1uTHZtbDVsRzBVTVFHdkdkMTN4VHQzNW1tZ3BoY0F6VDJVWTFhUWpxdW8td1M0bnkzQTRtVGc5bGxQNV81ejV1dm1JX1ZDRVZIXzI3eXVnbHJBcC1Lb0NULUhHOGp3ZGNGeVFKbFNXbVh0Y2NQei1UbjBFLVhuZWp0eXh5NzVjOEtjS1FqTUppQWdOSDRmWWtSOUZPRHQtSEpsTnJtNWZVX2t4VDlVTDB6WmxWTHN5dlZzZllNQkFBOEJNMWZkOEtoTk5jMnQ4Y0hydXVScTNILXZLXzJodGFUNmxhQnEtay1PVV9yYzJFNEhKaDZjcUszV3ZGM2VLaUxJZjlwRmViYXRfVGRSOFZ6OF9vU2h6WjVqNkhVU0tqZHduLTNlaFhuTHFXSG1WSk1HUVQ4dkdIdDZvUFdKNkxOeFlhTmJzd1J4dGQzLXBHUmsxaHYtdFc2cTI1VDZsMWJGdE5Pb1RmR2hlRGM3cjZPcDJ2eTljQk1GcTJXaTFtNFhndzlWbDBOby1kNzZhLU5WNTI1VUlzRmpQSkRvSlFFMUQzNllzbi0tU01OYTg1a2poS2ZrWHpQMjQyd1hDb3h4blE5ZlJmN2xIMEstRFR6cUFWcTNDRDFfbjNubXY4Z2ZseGdVY2NjTWk0NzQ3SDFZcWs2eWxZWlB0Vl9iSldlbktOMjVFWUp6UnVRb3dfOXFQdmhBZEN5clJpX1g4aVhmdERnbE5XX1FjVWNXODRtSm1LSUpDVnJHVGlEeUtGb3BPMVYyWU5TbnQzY29NLUY5c3Y2WmpNVTNlVjIxQ2RwSzlKTUZwY0RxY2FlMGFtd2tucFpjeUtDN2xwOERYcDJwSU1RY0dIdXdCTmJIcWdjbTh1Q04wVTh1dktzeVdob09wX25uU19BMFNlRlBrNG1wZFRKVVJFVzVfdGQxbGFYemFqZjJOQTd1R0NCZ0RrMWlTS3BMMy1hY0FMd25KTGFYelJPQjZvRnlYMnBFelhCREgyRDJ3TnJWNldWUllqOVVvdHV2cVRXRXlBbkJpaFJpd3RIc2RaamVaUERldXItT1pkTVVFczBzNi1hZmhDYTFzWVM3SElsYkxtMkoybC03YlZVRkt1NEVSWV8tWHRJTko4d2hqWllWVU04UXlkQV84ZjFzVm01bW83cTd4R3ZOSVNabGRSaXVlTU91MXR6RVFYeTNwNHd3bzNVY1RncHdzY1VKQWw2eWNvcmdER0N5RjZiQkRmNnh0S256MzhreldFTm9XMDhlY1VUeEhnNTM2bHNYVlpKdGJrdHd1Y3VCc0hYOHlEc1EyZXJLTUlMTlVQb0FmU0hpdy1WdS1iT19fTTlMQUVWa3BnWloxSXdUZHotMW5zWWVnTVBzelE3VmQtRlBOajNfcmJJNnlZYkpDdmxKWXoxcjBZYkV4Q2duNGx1MTlrYWVVOXktT0lVX2dfVnc2cW9nSEVHSHZCUzFHRFFPaW1ydUxlY012bVQtaVBnTXQ1VWxOZVI3YW9nTkhFdHlwUGlneFdOM2Jkcm1iWEcxN25pQVI0TUpvVW1hemlrYWl6M0dnSTQ0VWhVWVMxaHEzeS05cnJRSkJ6TEVEZTB4anYySzR6WWhyMEtkZ0ZVMkxDZXlkcHFCSTBfU2Z3ZEVTYkE1YlE1SXI0M3lhUnJGZ1l0QVZFU2ZqRDE1MUhSLU1lU2dxWFpUem04RHVqOVBTUkZhbkFLWUJ2aGZsX0w2SzBabTBoUUxiYVZxT25ydk56U01YdklJZmtPemxtT0Nrc1JGOWVRQnMydk5lZkRjRExEUzRaWXFfTlNRdVNOUTFacFdLcklqRXc1NDg2eGU2NXlid3Y0SnRVeWplVE5CVVF4Qm01ajJfOWY2U2NWcWlVajYwYXJ3eXZ6RGt4cldCMndES0wzZ0xERl91bmlQaDVtUmNXSERXekJCVDAtd2ZnVFBadERnQlJWUEl3cWxCb3FfOEh2NkJCUlZqUThCMUk0OXM3Sjk3Sng3WFBpdUlEUFRnLV9kMnhoa2Z5QVpLRFNLSWl0ck1WUnhKRWFaZ0J0VDZmTjY3MG5SMkZQYUx0YTQ3dmgzYzhpa0Nua0dIS0VzSGYzOERiYkN4ZXM1ZkpkYV9nMEJrMnA2aHgycWFfZ04ySmRGellBaUphdXA2X213WXNxVW1QbUpfa2xZZTUwbVA2azMyRjV0Q2dRcWJVajFuVTFjRHB5QUZUcTZ1X3RwNGVBSmVoNWt6ajFZNTkwS1J2TzZreHhfQ1EzTTdDUGpCb0V1SXdFWmh6YjF0Q1NHUnoxTy11MURZak03ZnNEX0NCSUxPbkVuZGZ5VDlCMkkwNl9lLUw2bk05MUVfU29NTmg4WFBSSGNibnZ4T3h4T25Yal81a3NDMG5veDBERVdLRVBEV0pqRi1CbEpnV01HajhjR29jbXFEdXpIcFdCblcyZ2dsX1ZUNHJuMkxHUFV6ekZiaklFMXpob0w4MXJNOXVhZ2dMX1oxdzNkRi1sV3JDaGpING10b21qYTI0QW00dWFOOVZHSWJlT3lZVy1qSzNnUGxPU0hTZnFTS2VVLW80cHFqWTAtYV9lbHI5WHdnZV9nM25oX2Iwd2lUeXgxQndDcENrczUxb3RlZlZkSnREU20xSjdCM0VBZk
Z4X1pkQ2YwQUszcWJxRVdwM25wM0w2WmpNTG1WM0RyWlU3TkpGeElBeGRkUTZlWEttazJuLW1mdFZtMTNVamRKZEthMkFfbS1HSnRtMFRTbFlVYnBqQ2puaUVrY2xieTB0TDF4UDNUWGNMUFo3enFtajB5T0dZbGRYQkxDQzlxdGNXN0d4V093RXUxd2Z4ek9oVEZzVUpabHpxeHNFVlVUbF9ORGk5U1BhVHg3QlJEWjdqM2g1VlFhb184SU1ZMERFLWJnY0htaXk2aXFoTjBMaUg2eTdjVmhHekFCbmdjNXJ2WkpQVE9jY09RRXE1SVJYenhvdVJxVGRDbnd5WGdrZnlyWG96T2FXcVRVRXBiWHBGRGFnZTUybmFJam1HanpPelNSZHBJLW5yaDgybm5BdldNNTRtcGNuaXRGbzJmODhyc2IzYmNUdDNoekNaRG5Oa2k1OUNXU2tuSnd2OG1GVjBGM2xid0lWd3QwWDR3Ukc4TlZZcXdkY2FOdS0xaWlsajAwQTBLVnJTVzljLS1WZF9WWGl0Y21naDdpX2c1djJwRFpuU3pWY1VNRWNOTy1GaVdSTUxDTHZ6VDFNYl9HRmsxVEZlTG5fVFdrNzRYdlpoSVRwMHJjSjdJT21lU09ObEtqTlNiX0FCNGtXRldpSHByY0htSGxOYS1JbWZWbWRCZUlLaFhxUG5haGZCMm1PbklDMGFJS0pmZ2RjUUtVSWpLeVBrTk5DMXo5VUVkeGZKRFRtZDh4OHJkV3BEbi0wUnJXY2x1Rk9XU04xeXpkbnA5U1FnV2huOTFYVlBGRG5Rem9CODdfRUU2d3liNy1LQ0JHbjFsVVRQc3pMS3JudjhDVG1Bb2xjYU1MaUprajZGT2dkS0ktd3IzNXZMSWhFQm9oamdGZW5KcWNOQ0Q5NDZGWXFzTE5peEVyenJHWG9ZaWtNRjVoUXJNVFVxdjhSYUgyRHZKcXpmOWtMRHJmV1dMRklLUTFvLVJZbDY5dzRFcXJxUGJoenF4SFpLYUFoOWpFcnNNWVZmMmsycy1YVXllMkhwNzJrOGl5eUpTTXdQVFhWTjJ4MVA1OC1FbnVlbGZSRDQwOWxkcUwwSkRWTFVHMUdhUi1jVnlBZVJKZmZNY3Z2OTQ4MVdqWDV5WFBvSjJJY25VZDdOc2JyUkRKQ1IybXY2NG5uX3NJUmJubXFvTHQ3dHVoV0pMQmRlb2tQOUU2cU1xN3gzVnFsMzVqVDhjSFNuSmNYZWczSG1veEhGWDlkTDNBRmtXVFhBaVQwSDR2dTJBaU1nRUJVZndMS3gzUjNXNlRRcGJqNnJkbldzT3FjdU1NSkwtOVZ2WEIzMEYtQlFFNW84dUthQjFvZFg4OVota255cWt5Mlo1cXpoWTJKel9vTmpLbHRlQVN0WkVURDh3WG8wSTY3OE0yTTF4OF9KQ3ctc1Uzd3hxejdyRlB0NzVXZkxKU1RvOUw1TWtEMjNCaWVaLTkzaExwTHlwaG9DaC1qcjE4cHgxRHRtMmc3TnZXaTJleHNHVW1WTzJlMkVlWFd6QUxfOENSUEJRU0E5d08xa2hjSkxBSG5kY1BlTzJxUkR0U3ppTG0zR0xTOTRLUU9YSmxGSWFoR0JjRmp4b1pUTmNCTFoxaTdBd0RpYmhJLVJyX19qOFkyLW5UUW4wYjBsbTFpa3MzSThqd3pFa0Q0aGZrVWFBWXhMdS1LaUg0R0xRNlB4YUdxb0xVdEtaMm1ld0ZVbi1uMTlGLVN5bDhNakN3czlnQ2ZYZHhFS2hlQ1ByZzNhNHljdFhiOHpJdDg3ZVdyeGFBVW1jdEkxbnJMMUw2WGpQcUFDc0N6WWQtMzhCNTZ0VWtXWnBlRmRIWnl5VkpkeF9XMzZnZG5MWnpoVFNRcGJhTjAxOUNPWkpZeTh6Zk9QQW9paU5IOXVESHgybkRHeUREY1ZCMFBIaVZ2aFN5dFNuWWJZbWJKX1N5dG9LUXl6LVViMzFIWHZURkVVVU1iTl9NdUNiTEwwem9CQ0EtODNyMGJLaGh1bGpXRkFBRWxXR1dTYlNIa3NEenN1NlBid2gxZTUyUFNhem5yVWN4Y0tF&textFormat=plainText&videoId=CJ_GCPaKywg&key=m_developer_key&alt=json",
"headers": {
"accept": "application/json",
"accept-encoding": "gzip, deflate",
"user-agent": "google-api-python-client/1.7.9 (gzip)",
"content-length": "5730",
"x-http-method-override": "GET",
"content-type": "application/x-www-form-urlencoded"
},
"methodId": "youtube.commentThreads.list",
"resumable": null,
"response_callbacks": [],
"_in_error_state": false,
"body_size": 0,
"resumable_uri": null,
"resumable_progress": 0
}
NOTE: I need to have order="relevance" in my request because I primarily need the most-voted comments.
An answer is nowhere to be found; I hope you can help me.
The issue is that we can't really retrieve all the comments of every video.
https://issuetracker.google.com/issues/134912604
"We currently don't support paging through the whole stream. So there's no way to retrieve all the 1000+ commentThreads that you have for that video"
This is not a solution to your problem. It just shows that querying the endpoint via a GET request method succeeds in obtaining the needed page response from the API.
# comments-wget [-d] VIDEO_ID [PAGE_TOKEN]
$ comments-wget() {
    local x='eval'
    [ "$1" == '-d' ] && {
        x='echo'
        shift
    }
    local v="$1"
    quote2 -i v
    local p="$2"
    quote2 -i p
    local O="/tmp/$v-comments%d.json"
    local o
    local k=0
    while :; do
        printf -v o "$O" "$k"
        [ ! -f "$o" ] && break
        (( k++ ))
    done
    quote o
    k="$APP_KEY"
    quote2 -i k
    local a="$AGENT"
    quote2 a
    local c="\
wget \
--debug \
--verbose \
--no-check-certif \
--output-document=$o \
--user-agent=$a \
'https://www.googleapis.com/youtube/v3/commentThreads?key=$k&videoId=$v&part=replies,snippet&order=relevance&maxResults=100&textFormat=plainText&alt=json${p:+&pageToken=$p}'"
    $x "$c"
}
$ PAGE_TOKEN=...
$ AGENT=... APP_KEY=... comments-wget CJ_GCPaKywg "$PAGE_TOKEN"
Setting --verbose (verbose) to 1
Setting --check-certificate (checkcertificate) to 0
Setting --output-document (outputdocument) to /tmp/CJ_GCPaKywg-comments0.json
Setting --user-agent (useragent) to ...
DEBUG output created by Wget 1.14 on linux-gnu.
--2019-06-10 17:41:11-- https://www.googleapis.com/youtube/v3/commentThreads?...
Resolving www.googleapis.com... 172.217.19.106, 216.58.214.202, 216.58.214.234, ...
Caching www.googleapis.com => 172.217.19.106 216.58.214.202 216.58.214.234 172.217.16.106 172.217.20.10 2a00:1450:400d:808::200a
Connecting to www.googleapis.com|172.217.19.106|:443... connected.
Created socket 5.
Releasing 0x0000000000ae57c0 (new refcount 1).
---request begin---
GET /youtube/v3/commentThreads?.../1.1
User-Agent: ...
Accept: */*
Host: www.googleapis.com
Connection: Keep-Alive
---request end---
HTTP request sent, awaiting response...
---response begin---
HTTP/1.1 200 OK
Expires: Mon, 10 Jun 2019 14:43:39 GMT
Date: Mon, 10 Jun 2019 14:43:39 GMT
Cache-Control: private, max-age=0, must-revalidate, no-transform
ETag: "XpPGQXPnxQJhLgs6enD_n8JR4Qk/OUAqOrEpA9YYqmVx0wqn9en_OrE"
Vary: Origin
Vary: X-Origin
Content-Type: application/json; charset=UTF-8
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Content-Length: 205965
Server: GSE
Alt-Svc: quic=":443"; ma=2592000; v="46,44,43,39"
---response end---
200 OK
Registered socket 5 for persistent reuse.
Length: 205965 (201K) [application/json]
Saving to: ‘/tmp/CJ_GCPaKywg-comments0.json’
100%[==========================================>] 205,965 580KB/s in 0.3s
2019-06-10 17:41:18 (580 KB/s) - ‘/tmp/CJ_GCPaKywg-comments0.json’ saved [205965/205965]
Note that the shell functions quote and quote2 above are those from youtube-data.sh (they are not really needed). $PAGE_TOKEN is extracted from the body string of the JSON request object posted above.
The next question is: why does your Python code use a POST request method? Could that be the cause of your problem?
According to Google's Python Client Library sample code and to Google's YouTube API sample code, you should have been coding your pagination loop as shown below:
request = yt.commentThreads().list(...)
while request:
    response = request.execute()
    # your processing code goes here ...
    request = yt.commentThreads().list_next(request, response)

How do I set multiple cookies in a single Webrick response?

I use Webrick to test my HTTP client and I need to test how it gets and sets cookies.
Wikipedia provides an example of such response:
HTTP/1.0 200 OK
Content-type: text/html
Set-Cookie: theme=light
Set-Cookie: sessionToken=abc123; Expires=Wed, 09 Jun 2021 10:18:14 GMT
...
but if I do
server.mount_proc ?/ do |req, res|
res["set-cookie"] = %w{ 1=2 2=3 }
the whole array becomes a single cookie: "[\"1=2\", \"2=3\"]"
And then in the WEBrick::HTTPResponse source code I see @header = Hash.new, which probably means you can't repeat a header key.
Is it impossible?!
UPD:
This leaves me no hope:
https://github.com/rack/rack/issues/52#issuecomment-399629
https://github.com/rack/rack/blob/c859bbf7b53cb59df1837612a8c330dfb4147392/lib/rack/handler/webrick.rb#L98-L100
Another method should be used instead of res[...]=:
res.cookies.push WEBrick::Cookie.new("1", "2")
res.cookies.push WEBrick::Cookie.new("3", "4")
res.cookies.push WEBrick::Cookie.new("1", "5")

Print only specific headers using Curb gem [duplicate]

This question already has an answer here:
Get response headers from Curb
(1 answer)
Closed 8 years ago.
I have a question about the Ruby gem Curb. I'm playing around with this gem and have this piece of code:
require 'curb'
require 'colorize'

def err(msg)
  puts
  puts msg.red
  puts 'HOWTO: '.white + './script.rb <domain>'.red
  puts
end

target = ARGV[0] || err("You forgot something....")

Curl::Easy.perform(target) do |curl|
  curl.headers["User-Agent"] = "Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:1.7.7) Gecko/20050421"
  curl.verbose = true
end
For example, when I try it on google.com, I get these headers (I'm not including the whole output from the script):
Host: google.com
Accept: */*
User-Agent: Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:1.7.7) Gecko/20050421
* STATE: DO => DO_DONE handle 0x1c8dd80; (connection #0)
* STATE: DO_DONE => WAITPERFORM handle 0x1c8dd80; (connection #0)
* STATE: WAITPERFORM => PERFORM handle 0x1c8dd80; (connection #0)
* additional stuff not fine transfer.c:1037: 0 0
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 302 Found
< Cache-Control: private
< Content-Type: text/html; charset=UTF-8
< Location: https://www.google.cz/?gfe_rd=cr&ei=2stTVO2eJumg8we6woGoCg
< Content-Length: 259
< Date: Fri, 31 Oct 2014 17:50:18 GMT
< Server: GFE/2.0
< Alternate-Protocol: 443:quic,p=0.01
My question: is there any way to print only specific headers via Curb? For example, I'd like only these headers in the output, like this:
Content-Type: text/html; charset=UTF-8
Location: https://www.google.cz/?gfe_rd=cr&ei=2stTVO2eJumg8we6woGoCg
Server: GFE/2.0
And nothing more. Is there any way to do it with this gem? Or, if you have any idea how to do it using another gem, let me know.
It's not the most difficult thing to just parse it yourself.
That's exactly what "Get response headers from Curb" proposes.

This is how Googlebot sees my webpage (as seen in Webmaster Tools - Fetch as Google)

I have a Joomla website. In my Google Webmaster Tools, this is how Googlebot fetched the disclaimer page on my site. What does it mean? I don't see any content here.
My real page is this: http://www.asklaw.in/disclaimer.
(I am referring to this page as an example; other pages also do not show any content.)
I don't see any content on this page as fetched by Googlebot.
Fetch as Google
This is how Googlebot fetched the page. URL:
http://www.asklaw.in/disclaimer
Date: Friday, February 14, 2014 at 12:32:33 AM PST
Googlebot Type: Web
Download Time (in milliseconds): 407
HTTP/1.1 200 OK
Server: nginx/1.4.5
Date: Fri, 14 Feb 2014 08:32:34 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive
Set-Cookie: 226d2339faeab0a35cea40673655bfc1=ea6579466180b66de9e73781d5179047; path=/
Cache-Control: max-age=3600
Expires: Fri, 14 Feb 2014 09:32:34 GMT
Content-Encoding: gzip
jos-Warning: JLIB_APPLICATION_ERROR_COMPONENT_NOT_LOADING
JSite -> initialise() # /home4/pawanhg/public_html/asklaw.in/index.php:30
JApplication -> initialise() # /home4/pawanhg/public_html/asklaw.in/includes/application.php:116
JApplication -> triggerEvent() # /home4/pawanhg/public_html/asklaw.in/libraries/joomla/application/application.php:230
JDispatcher -> trigger() # /home4/pawanhg/public_html/asklaw.in/libraries/joomla/application/application.php:642
JEvent -> update() # /home4/pawanhg/public_html/asklaw.in/libraries/joomla/event/dispatcher.php:161
call_user_func_array() # /home4/pawanhg/public_html/asklaw.in/libraries/joomla/event/event.php:71
plgSystemAdmintoolsPro -> onAfterInitialise()
plgSystemAdmintoolsPro -> IPFiltering() # /home4/pawanhg/public_html/asklaw.in/plugins/system/admintools/admintools/pro.php:136
JError :: raiseError() # /home4/pawanhg/public_html/asklaw.in/plugins/system/admintools/admintools/pro.php:676
JError :: raise() # /home4/pawanhg/public_html/asklaw.in/libraries/joomla/error/error.php:251
JError :: throwError() # /home4/pawanhg/public_html/asklaw.in/libraries/joomla/error/error.php:176
call_user_func_array() # /home4/pawanhg/public_html/asklaw.in/libraries/joomla/error/error.php:214
JError :: handleCallback()
call_user_func() # /home4/pawanhg/public_html/asklaw.in/libraries/joomla/error/error.php:765
plgSystemRedirect :: handleError()
JError :: customErrorPage() # /home4/pawanhg/public_html/asklaw.in/plugins/system/redirect/redirect.php:109
JDocumentError -> render() # /home4/pawanhg/public_html/asklaw.in/libraries/joomla/error/error.php:798
JDocumentError -> _loadTemplate() # /home4/pawanhg/public_html/asklaw.in/libraries/joomla/document/error/error.php:107
require_once() # /home4/pawanhg/public_html/asklaw.in/libraries/joomla/document/error/error.php:135
require() # /home4/pawanhg/public_html/asklaw.in/templates/yoo_nano3/error.php:19
Warp\Joomla\Helper\SystemHelper -> init() # /home4/pawanhg/public_html/asklaw.in/templates/yoo_nano3/warp.php:33
Warp\Joomla\Helper\SystemHelper -> initSite() # /home4/pawanhg/public_html/asklaw.in/templates/yoo_nano3/warp/systems/joomla/src/Warp/Joomla/Helper/SystemHelper.php:119
JSite -> getParams() # /home4/pawanhg/public_html/asklaw.in/templates/yoo_nano3/warp/systems/joomla/src/Warp/Joomla/Helper/SystemHelper.php:139
JComponentHelper :: getParams() # /home4/pawanhg/public_html/asklaw.in/includes/application.php:358
JComponentHelper :: getComponent() # /home4/pawanhg/public_html/asklaw.in/libraries/joomla/application/component/helper.php:92
JComponentHelper :: _load() # /home4/pawanhg/public_html/asklaw.in/libraries/joomla/application/component/helper.php:43
JError :: raiseWarning() # /home4/pawanhg/public_html/asklaw.in/libraries/joomla/application/component/helper.php:415
JError :: raise() # /home4/pawanhg/public_html/asklaw.in/libraries/joomla/error/error.php:276
JLIB_APPLICATION_ERROR_COMPONENT_NOT_LOADING is what you need to investigate further.
This Joomla forum post indicates the issue may be with having uninstalled a component, or perhaps not having the latest version installed, and there being a bug: http://forum.joomla.org/viewtopic.php?f=579&t=578754
Example components mentioned include the Kunena forum component, Zoo Theme templates and extensions (which you have, as the Warp template framework is theirs), and some others.
If you had one of those installed, it may not have removed everything, and that is then triggering the page to look for something else.
Check that anything you've removed was removed entirely (components, plugins, modules).
