How to skip some file types while crawling with Scrapy? (mime)

I want to skip links to certain file types (.exe, .zip, .pdf) while crawling with Scrapy, but I don't want to use a Rule with a specific URL regular expression. How can I do this?
Update:
Since it's hard to decide whether to follow a link based on the response's Content-Type when the body hasn't been downloaded yet, I changed my approach to dropping URLs in a downloader middleware. Thanks Peter and Leo.

If you go to linkextractor.py within the Scrapy root directory, you will see the following:
"""
Common code and definitions used by Link extractors (located in
scrapy.contrib.linkextractor).
"""
# common file extensions that are not followed if they occur in links
IGNORED_EXTENSIONS = [
# images
'mng', 'pct', 'bmp', 'gif', 'jpg', 'jpeg', 'png', 'pst', 'psp', 'tif',
'tiff', 'ai', 'drw', 'dxf', 'eps', 'ps', 'svg',
# audio
'mp3', 'wma', 'ogg', 'wav', 'ra', 'aac', 'mid', 'au', 'aiff',
# video
'3gp', 'asf', 'asx', 'avi', 'mov', 'mp4', 'mpg', 'qt', 'rm', 'swf', 'wmv',
'm4a',
# other
'css', 'pdf', 'doc', 'exe', 'bin', 'rss', 'zip', 'rar',
]
However, since this applies to the link extractor (and you said you don't want to use Rules), I am not sure this will solve your problem. (I initially read your question as asking how to change the file-extension restrictions without specifying them directly in a rule.)
The good news is that you can also build your own downloader middleware and drop any/all requests to URLs that have an undesirable extension. See Downloader Middleware.
You can get the requested URL by accessing the request object's url attribute: request.url.
Basically, search the end of the string for '.exe' or whatever extension you want to drop, and if it ends with one of those extensions, raise an IgnoreRequest exception; the request will immediately be dropped.
UPDATE
In order to process the request prior to it being downloaded, you need to make sure you define the 'process_request' method within your custom downloader middleware.
According to the Scrapy documentation
process_request
This method is called for each request that goes through the download
middleware.
process_request() should return either None, a Response object, or a
Request object.
If it returns None, Scrapy will continue processing this request,
executing all other middlewares until, finally, the appropriate
downloader handler is called and the request performed (and its
response downloaded).
If it returns a Response object, Scrapy won’t bother calling ANY other
request or exception middleware, or the appropriate download
function; it’ll return that Response. Response middleware is always
called on every Response.
If it returns a Request object, the returned request will be
rescheduled (in the Scheduler) to be downloaded in the future. The
callback of the original request will always be called. If the new
request has a callback it will be called with the response
downloaded, and the output of that callback will then be passed to the
original callback. If the new request doesn’t have a callback, the
response downloaded will be just passed to the original request
callback.
If it raises an IgnoreRequest exception, the entire request will be
dropped completely and its callback never called.
So essentially, just create a downloader middleware class with a process_request method that takes a request object and a spider object as parameters, and raise an IgnoreRequest exception if the URL contains an unwanted extension.
This should all occur prior to the page being downloaded. If you want to process the response headers instead, however, a request will have to be made to the webpage.
You could always implement both a process_request and a process_response method in the middleware, with the idea being that obvious extensions are dropped immediately, and, if for some reason the URL does not carry a telltale file extension, the request is performed and then caught in the process_response method (since you can verify the content type in the response headers).
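For concreteness, here is a minimal sketch of that two-pronged middleware. The class name, extension tuple, and content-type list are my own illustrations (not anything Scrapy ships), and note that response header values are bytes in recent Scrapy versions:
from scrapy.exceptions import IgnoreRequest

class DropUnwantedFilesMiddleware(object):
    """Sketch: drop requests by URL extension, then by Content-Type."""

    DROP_EXTENSIONS = ('.exe', '.zip', '.pdf')          # illustrative list
    DROP_CONTENT_TYPES = (b'application/zip', b'application/pdf',
                          b'application/octet-stream')  # illustrative list

    def process_request(self, request, spider):
        # Cheap pre-download check on the URL itself.
        if request.url.lower().endswith(self.DROP_EXTENSIONS):
            raise IgnoreRequest()
        return None  # let Scrapy keep processing the request

    def process_response(self, request, response, spider):
        # Fallback for URLs without a telltale extension: inspect the
        # Content-Type header once the response headers are available.
        content_type = response.headers.get('Content-Type', b'')
        if any(content_type.startswith(ct) for ct in self.DROP_CONTENT_TYPES):
            raise IgnoreRequest()
        return response
You would enable it in settings.py under DOWNLOADER_MIDDLEWARES, the same way as the FilterResponses middleware shown further down this page.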

.zip and .pdf are ignored by Scrapy by default.
As a general rule, you can either configure a rule to include only URLs that match your regexp (.htm* in this case):
rules = (Rule(SgmlLinkExtractor(allow=('\.htm')), callback='parse_page', follow=True, ), )
or exclude the ones that match a regexp:
rules = (Rule(SgmlLinkExtractor(allow=('.*'), deny=('\.pdf', '\.zip')), callback='parse_page', follow=True, ), )
Read the documentation for more information.
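If you do end up using link extractors, note that newer Scrapy versions also let you override the extension blacklist directly. A sketch using the modern LinkExtractor (which replaced SgmlLinkExtractor) and its deny_extensions argument; the spider name and URL are placeholders:
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class MySpider(CrawlSpider):
    name = 'myspider'  # illustrative spider
    start_urls = ['http://example.com']

    rules = (
        # deny_extensions replaces the default IGNORED_EXTENSIONS list;
        # extensions are given without the leading dot
        Rule(LinkExtractor(deny_extensions=['exe', 'zip', 'pdf']),
             callback='parse_page', follow=True),
    )

    def parse_page(self, response):
        pass  # your parsing logic here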

I built this Middleware to exclude any response type that isn't in a whitelist of regular expressions:
from scrapy.http.response.html import HtmlResponse
from scrapy.exceptions import IgnoreRequest
from scrapy import log
import re

class FilterResponses(object):
    """Limit the HTTP response types that Scrapy downloads."""

    @staticmethod
    def is_valid_response(type_whitelist, content_type_header):
        for type_regex in type_whitelist:
            if re.search(type_regex, content_type_header):
                return True
        return False

    def process_response(self, request, response, spider):
        """
        Only allow HTTP response types that match the given list of
        filtering regexes.
        """
        # to specify on a per-spider basis
        # type_whitelist = getattr(spider, "response_type_whitelist", None)
        type_whitelist = (r'text', )
        content_type_header = response.headers.get('content-type', None)
        if not content_type_header or not type_whitelist:
            return response
        if self.is_valid_response(type_whitelist, content_type_header):
            return response
        else:
            msg = "Ignoring request {}, content-type was not in whitelist".format(response.url)
            log.msg(msg, level=log.INFO)
            raise IgnoreRequest()
To use it, add it to settings.py:
DOWNLOADER_MIDDLEWARES = {
    '[project_name].middlewares.FilterResponses': 999,
}
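As the commented-out getattr line hints, the whitelist could also be supplied per spider. A hypothetical spider would then simply declare the attribute; spiders without it would pass everything through, thanks to the 'if not type_whitelist' guard:
import scrapy

class DocsSpider(scrapy.Spider):  # hypothetical spider
    name = 'docs'
    start_urls = ['http://example.com']
    # Only responses whose Content-Type matches one of these regexes
    # will be let through by FilterResponses.
    response_type_whitelist = (r'text/html', r'application/xml')

    def parse(self, response):
        pass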

Related

Does AWS apigateway change http body? How can I stop it from doing this?

My application:
(1) A front-end UI sends an HTTP request using the POST method that contains a zip file in the body, via form-data.
(2) AWS API Gateway receives this request and forwards it to a Lambda proxy.
(3) An AWS Lambda function, implemented in Python, receives the request and decompresses the zip file to a temporary folder.
The problem I'm facing:
(1) and (2) work fine, but in (3) the Python program in Lambda fails to decompress the file.
My finding:
It seems that when it is sent from the UI, the body contains the binary data of the zip file, like below:
"PK\x03\x04\n\x00\x00\x00\x00\x00\xd6;TO\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00x2.txtPK\x03\x04\n\x00\x00\x00\x00\x00\xd6;TO\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00x1.txtPK\x01\x02\x14\x00\n\x00\x00\x00\x00\x00\xd6;TO\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00x2.txtPK\x01\x02\x14\x00\n\x00\x00\x00\x00\x00\xd6;TO\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00$\x00\x00\x00x1.txtPK\x05\x06\x00\x00\x00\x00\x02\x00\x02\x00h\x00\x00\x00H\x00\x00\x00\x00\x00"
But at (3), if the Python code in Lambda simply returns the response like below:
response = {
    "statusCode": 200,
    "headers": {
        "lambda-response": str(event["body"])
    },
    "body": "",
    "isBase64Encoded": False
}
return response
we find the binary data in the body changed. It seems that API Gateway has converted the content
from:
"PK\x03\x04\n\x00\x00\x00\x00\x00\xd6;TO\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00x2.txtPK\x03\x04\n\x00\x00\x00\x00\x00\xd6;TO\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00x1.txtPK\x01\x02\x14\x00\n\x00\x00\x00\x00\x00\xd6;TO\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00x2.txtPK\x01\x02\x14\x00\n\x00\x00\x00\x00\x00\xd6;TO\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00$\x00\x00\x00x1.txtPK\x05\x06\x00\x00\x00\x00\x02\x00\x02\x00h\x00\x00\x00H\x00\x00\x00\x00\x00"
into:
"PK\u0003\u0004\n\u0000\u0000\u0000\u0000\u0000\ufffd;TO\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0006\u0000\u0000\u0000x2.txtPK\u0003\u0004\n\u0000\u0000\u0000\u0000\u0000\ufffd;TO\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0006\u0000\u0000\u0000x1.txtPK\u0001\u0002\u0014\u0000\n\u0000\u0000\u0000\u0000\u0000\ufffd;TO\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0006\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000
\u0000\u0000\u0000\u0000\u0000\u0000\u0000x2.txtPK\u0001\u0002\u0014\u0000\n\u0000\u0000\u0000\u0000\u0000\ufffd;TO\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0006\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000
\u0000\u0000\u0000$\u0000\u0000\u0000x1.txtPK\u0005\u0006\u0000\u0000\u0000\u0000\u0002\u0000\u0002\u0000h\u0000\u0000\u0000H\u0000\u0000\u0000\u0000\u0000\r\n"
This is weird; what can I do to stop it?
(2019/12/17 update) Below is the Lambda code I'm using.
import json     # to decode json
import os       # file IO
import shutil   # file IO (use this to recursively force remove a directory)
import zipfile  # zip decompression (used by decompress_zip_file below)

print('Loading function')

def decompress_zip_file(src_file_path, dest_dir_path):
    '''
    Decompress a zip file into a directory.
    Args:
        src_file_path (String): source zip file's path.
        dest_dir_path (String): the destination of the output directory.
    Returns:
        isSuccess (bool): the operation is successful or not.
    '''
    error_msg = "Nothing."
    try:
        if os.path.isdir(dest_dir_path):
            shutil.rmtree(dest_dir_path)
        with zipfile.ZipFile(src_file_path, 'r') as zip_ref:
            zip_ref.extractall(dest_dir_path)
    except Exception as ep:
        error_msg = "Error in decompress_zip_file(), ep={}:{}".format(type(ep).__name__, str(ep))
        print(error_msg)
        return (False, error_msg)
    return (True, error_msg)

def decompress_zip_file_from_content_in_binary(src_file_in_binary, dest_dir_path):
    '''
    Decompress a zip file content into a directory.
    Args:
        src_file_in_binary (byte array): source zip file's content in binary format.
        dest_dir_path (String): the destination of the output directory.
    Returns:
        isSuccess (bool): the operation is successful or not.
    '''
    # write the obtained binary data into a tmp zip file
    tmp_file_path = "/tmp/tmp.zip"
    if os.path.isfile(tmp_file_path):
        os.remove(tmp_file_path)
    output_file = open(tmp_file_path, 'wb')
    output_file.write(src_file_in_binary)
    output_file.close()
    (isSuccess, error_msg) = decompress_zip_file(tmp_file_path, dest_dir_path)
    return (isSuccess, error_msg)

def convert_from_http_body_encoding_to_local_binary(extracted_file_from_http_body_str):
    '''
    Convert the file (in binary string format) from event['body'] encoding to local binary encoding.
    Args:
        extracted_file_from_http_body_str (string): the event['body'] file (in binary string format).
    Returns:
        zipfile_binary1 (binary array): the conversion result.
    '''
    zipfile_binary1 = bytes(extracted_file_from_http_body_str, encoding="ascii")  # convert into a zipfile in binary format
    return zipfile_binary1

def extract_zipfile_binary_from_body(body_str):
    '''
    Extract the zipfile (in binary format) from event['body'] string.
    Args:
        body_str (string): the event['body'] string.
    Returns:
        (binary array): the conversion result.
    '''
    retValue = ""
    tmpArray = body_str.split("application/zip")  # split the content based on MIME part field data; cut the head
    if len(tmpArray) > 1:
        retValue += "entered-Lv1."
        tmpArray = tmpArray[1].split("PK")  # split the content based on zip file header.
        if len(tmpArray) > 1:
            retValue += "entered-Lv2."
            zipfile_str = "PK" + 'PK'.join(tmpArray[1:])  # add back the zip file header
            tmpArray = zipfile_str.split("------WebKitFormBoundary")  # split the content based on MIME part field data; cut the tail
            if len(tmpArray) > 1:
                zipfile_str = tmpArray[0]
            zipfile_binary = convert_from_http_body_encoding_to_local_binary(zipfile_str)
            retValue = zipfile_binary
    return retValue

def handler(event, context):
    '''Provide an event that contains the following keys:
      - operation: one of the operations in the operations dict below
      - payload: a parameter to pass to the operation being performed
    '''
    # set the mapping table for "operation" x "return value"
    operations = {
        'unzip': lambda x: decompress_zip_file_from_content_in_binary(**x),  # unzip an uploaded file
        'ping': lambda x: 'pong'  # respond to ping req.
    }
    # because we use "Lambda Proxy", api-gateway forwards the whole packet to lambda without resolving it.
    event_headers = event["headers"]
    operation = event_headers['operation']
    event_body = event["body"]
    if operation == 'unzip':
        src_file_in_binary = extract_zipfile_binary_from_body(event_body)
        payload_json = {}
        payload_json['src_file_in_binary'] = src_file_in_binary
        payload_json['dest_dir_path'] = "/tmp/tmp_zipfile_output"
        event_headers["payload"] = payload_json
    if operation in operations:
        responseBody = operations[operation](event_headers.get('payload'))
        response = {
            "statusCode": 200,
            "headers": {
                "lambda-response": str(responseBody)  # the api-gateway will forward the header to the front end.
            },
            "body": "",
            "isBase64Encoded": False
        }
        return response
    else:
        raise ValueError('Unrecognized operation "{}"'.format(operation))
Below is a response from AWS support. LGTM. Leave it here so that people can see the solution to this issue in the future.
=====================Below is the response from AWS support ==================
Hi,
Thank you for contacting AWS Premium Support. I am Jyoti, and I will assist you with this case today.
From the case correspondence, I understand that you are concerned that API Gateway modifies
the binary data payload before proxying to your Lambda function. Please correct me if my understanding is wrong.
Expected Behaviour:
API gateway does modify the binary data payload into UTF-8 encoded JSON strings if
the API is configured at its default settings. Hence this is an expected behaviour.
Kindly note, as per [1], we must configure the API to support binary payloads for
our API in API Gateway. API Gateway can not send binary as is, since it has to send
a JSON body to the lambda proxy. Hence, it encodes the data/payload in UTF-8 by default.
Solution:
In order to overcome the aforementioned challenge, we need to add the desired
binary media types (application/zip in this case) to the binaryMediaTypes list
on the RestApi resource's settings page. For further information on how to achieve
this, please refer here --> [2]. If this property is not defined, the payloads
are handled as UTF-8 encoded JSON strings as mentioned in [1].
This is why the file in your request looks UTF-8 encoded. After configuring the API,
the event received by the Lambda would be a Base64-encoded string.
If you want to conduct operations on this object (the encoded request body, event["body"]),
then you may decode the Base64-encoded string to its original binary form with
the following lines (in the case of a Python runtime):
import base64
coded_string = str(event["body"])
base64.b64decode(coded_string)
Troubleshooting:
I tried to replicate your setup in my environment. Instead of the frontend 'UI' of the application,
I used Postman as a client, while the rest of the setup (API Gateway and Lambda) are similar.
I am making a POST request to my API from Postman, with the request headers 'Content-Type' and 'Accept',
both set to the value 'application/zip', which is the binary media type that is being sent and
also being expected in the response. My API has been configured to support binary media types being
passed in the request body. I have added 'application/zip' in the binaryMediaTypes list for the API.
Finally, in the Lambda function I am decoding the base64-encoded request body (i.e. event["body"])
to its original binary form by using the base64 library (in python).
If you still want to confirm the consistency of your request's form-data by returning the binary
data in your response, you can refer to the following snippet:
response = {
    'isBase64Encoded': True,  # Ensure the body is Base64-encoded
    'statusCode': 200,
    'headers': { "Content-Type": "application/zip" },  # Define the Content-Type
    'body': event["body"]  # The response body returns the Base64-encoded value
}
We set the isBase64Encoded parameter to True and API Gateway automatically decodes the
response body depending on the Content-Type (i.e. the binary data/media type) that the
client (in my case Postman), is set to receive (i.e. application/zip). Kindly note, the 'Accept'
header that I had sent in my header, is to validate that the response body contains the binary
data type, the request was made for.
The above response body was the same as the request body binary data that was first sent
through the API, in my setup.
Hope I have addressed your concerns. However, if you still need help with the implementation,
please contact us again and I will be happy to assist you.
References:
=-=-=-=--=-=-=-=-=-=
[1] Support Binary Payloads in API Gateway: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings.html
[2] Enable Binary Support Using the API Gateway Console: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings-configure-with-console.html
Best regards,
Jyoti Prakash P.
Amazon Web Services
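Based on that guidance, the unzip path in my handler boils down to something like the following sketch once binaryMediaTypes is configured. It reuses decompress_zip_file_from_content_in_binary from my code above; the exact flow is my assumption, not AWS-provided code:
import base64

def handler(event, context):
    # With the binary media type registered, event["body"] arrives
    # Base64-encoded, so a single decode yields the raw zip bytes.
    zip_bytes = base64.b64decode(event["body"])
    (isSuccess, error_msg) = decompress_zip_file_from_content_in_binary(
        zip_bytes, "/tmp/tmp_zipfile_output")
    return {
        "statusCode": 200 if isSuccess else 500,
        "headers": {"lambda-response": error_msg},
        "body": "",
        "isBase64Encoded": False
    }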
2019/12/20 update
I realized that my content type is actually multipart/form-data instead of application/zip, so I modified the setting again and then it worked.
Below is the help from AWS support. Many thanks to their help.
Hi,
Thanks a lot for elaborating your application flow and the logs. I understand now that your HTTP Request header 'content-type' is set to 'multipart/form-data'. I agree that for a web form to upload a file it is quite common to set content type as form-data and AWS API Gateway does support it. You would like to know if you could prevent UTF-8 encoding without changing the front end code. Please correct me if my understanding is wrong.
For the ease of discussion, I would like to separate the approach of troubleshooting for the HTTP request and response.
For the request to the API:
Please add 'multipart/form-data' as one of the values in the binaryMediaTypes list on your API's settings page in the API Gateway console. You would not have to alter your code, HTTP request, or any of its headers. Kindly note that to handle binary media/data in API Gateway, the HTTP Request Content-Type header must match one of the values in the binaryMediaTypes list.
In your use case, if you want to send the binary media back in a response for your request, the HTTP Request 'Content-Type' and 'Accept' headers, the binaryMediaType value of the API and the HTTP Response 'Content-Type' must all be set to 'multipart/form-data'. I tried the above and it worked for me with Postman Client. The 'boundary' directive is set up by Postman automatically if the HTTP Request 'Content-Type' is set to 'multipart/form-data'. Hence, you would have to only add 'multipart/form-data' in the 'binaryMediaType' list. Please have a look at my HTTP request, below:
POST /stg-with-logs HTTP/1.1
Host: <some-api-id>.execute-api.us-east-1.amazonaws.com
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW
Accept: multipart/form-data
Cache-Control: no-cache
Postman-Token: 123b64f9-5669-f794-b9df-34a7561e9708
------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="File"; filename="archive.zip"
Content-Type: application/zip
------WebKitFormBoundary7MA4YWxkTrZu0gW--
For the response from the API:
I noticed while going through your API Gateway logs that the header 'isBase64Encoded' was not set. Kindly set it to true. API Gateway automatically decodes any Base64-encoded string in the body of your HTTP response if 'isBase64Encoded' is set to true. Please have a look at the HTTP response from my Lambda below:
(a6729f56-b245-45a4-9ac4-7e00b23c8957) Endpoint response body before transformations:
{
    "isBase64Encoded": true,
    "statusCode": 200,
    "headers": {
        "Content-Type": "multipart/form-data",
        "Accept": "multipart/form-data"
    },
    "body": "LS0tLS0tV2ViS2l0Rm9ybUJvdW5kYXJ5SmxkSW1aV1lHczlSTndPWQ0KQ29udGVudC1EaXNwb3NpdGlvbjogZm9ybS1kYXRhOyBuYW1lPSJGaWxlIjsgZmlsZW5hbWU9ImFyY2hpdmUuemlwIg0KQ29udGVudC1UeXBlOiBhcHBsaWNhdGlvbi96aXANCg0KUEsDBBQAAAAIAKF4kE9/Mo7/XgAAAJcAAAAaABwASGVsbG8tV29ybGQtNjY3MzMxNTI4MS50eHRVVAkAA8ZP910SUPdddXgLAAEEHZHreQTMewNxNY1BDgIxDAPvvIVPOY3SEC+9WCrfJ13EZWTNHKwKkzMmxIp5dpsnFMlqrjzBF/SKxCW2/8dl3ttGGjTqnkdMG+Wwj96jA3/YJsC2QF9iesuLUXPfv80KrpaVYeDjC1BLAQIeAxQAAAAIAKF4kE9/Mo7/XgAAAJcAAAAaABgAAAAAAAEAAACkgQAAAABIZWxsby1Xb3JsZC02NjczMzE1MjgxLnR4dFVUBQADxk/3XXV4CwABBB2R63kEzHsDcVBLBQYAAAAAAQABAGAAAACyAAAAAAANCi0tLS0tLVdlYktpdEZvcm1Cb3VuZGFyeUpsZEltWldZR3M5Uk53T1kNCkNvbnRlbnQtRGlzcG9zaXRpb246IGZvcm0tZGF0YTsgbmFtZT0iVGVzdCBEYXRhIg0KDQpUZXN0aW5nIEJvdW5kYXJ5IGluIG11bHRpcGFydC9mb3JtLWRhdGENCi0tLS0tLVdlYktpdEZvcm1Cb3VuZGFyeUpsZEltWldZR3M5Uk53T1ktLQ0K"
}
Along with this correspondence I am attaching my API Gateway Swagger file and Lambda function code for your reference. The setup worked fine for me and I was able to return the binary payload upon making an HTTP Request. If you want to test it out in your environment, please set the appropriate credentials and lambda uri in the Swagger file.
Hope this addresses your concern. However, if the issue still persists or you have any further questions, please contact us again and I will be happy to assist you.
To see the file named 'binaryPost-stg-with-logs-oas30-apigateway.yaml,python-binary-response.py' included with this correspondence, please use the case link given below the signature.
Best regards,
Jyoti Prakash P.
Amazon Web Services
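Given the guidance above, here is a rough sketch (my assumption, not code from AWS) of pulling the zip out of the Base64-decoded multipart body with Python's standard email parser, instead of the hand-rolled string splitting in my original extract_zipfile_binary_from_body:
import base64
from email import message_from_bytes

def extract_zip_from_multipart(event):
    # Assumes 'multipart/form-data' is in the API's binaryMediaTypes,
    # so event["body"] arrives as a Base64-encoded multipart payload.
    raw = base64.b64decode(event["body"])
    content_type = (event["headers"].get("content-type")
                    or event["headers"].get("Content-Type"))
    # The stdlib email parser can split the multipart body if we hand it
    # the Content-Type header (which carries the boundary) up front.
    msg = message_from_bytes(
        b"Content-Type: " + content_type.encode("ascii") + b"\r\n\r\n" + raw)
    if not msg.is_multipart():
        return None
    for part in msg.get_payload():
        if part.get_content_type() == "application/zip":
            return part.get_payload(decode=True)  # the zip file's bytes
    return None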

(REDDIT) Error trying to subscribe to subreddits via API

I know that Snoo seems to be unmaintained, but I wanted to use a Ruby framework since I'm trying to improve my Ruby skills.
I'm trying to add some functionality starting with subscribing and unsubscribing to subreddits. Link to API doc.
My first attempt was with the built-in post method, which returned a 404 error:
def subscribe(subreddit)
  logged_in?
  post('/api/subscribe.json', body: {uh: @modhash, action: 'sub', sr: subreddit, api_type: 'json'})
end
Since the built-in post method was giving me a 404, I decided to try the HTTParty post method:
def subscribe(subreddit)
  logged_in?
  HTTParty.post('http://www.reddit.com/api/subscribe.json', body: {uh: @modhash, action: 'sub', sr: subreddit, api_type: 'json'})
end
That returns this:
pry(main)> reddit.subscribe('/r/nba')
=> {"json"=>{"errors"=>[["USER_REQUIRED", "please login to do that", nil]]}}
Does anyone know if I need to pass more info in the body or if I'm just sending a badly formed request? Thanks!
Also, before running "reddit.subscribe" I verified that I'm logged in with a cookie, have a modhash, can access my account info, etc.
Solution found:
def subscribe(subreddit)
  # query the subreddit for its 'about' info and get json back
  subreddit_json = self.subreddit_info(subreddit)
  # build the coded unique identifier for the targeted subreddit
  subreddit_id = subreddit_json['kind'] + "_" + subreddit_json['data']['id']
  # send post request to server
  server_response = self.class.post('/api/subscribe.json',
    body: {uh: @modhash, action: 'sub', sr: subreddit_id, api_type: 'json'})
end
The Reddit API doesn't accept the subreddit name as the value passed with 'sr', (e.g. sr:'/r/funny'). It requires the subreddit "type" (which is always 't5' for subreddits) and unique forum id. The parameter passed would look something like: sr: "t5_2qo4s". This information is available if you go to your target subreddit and add about.json, e.g., www.reddit.com/r/funny/about.json
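For illustration (in Python rather than Ruby; the function name is mine), the lookup boils down to:
import requests  # assumed available; any HTTP client works

def subreddit_fullname(subreddit_name):
    # e.g. subreddit_name = "funny"; returns something like "t5_2qo4s"
    resp = requests.get(
        "https://www.reddit.com/r/{}/about.json".format(subreddit_name),
        headers={"User-Agent": "fullname-lookup/0.1"})
    data = resp.json()
    return data["kind"] + "_" + data["data"]["id"]  # always "t5_" + id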

Random "SignatureDoesNotMatch" forbidden requests even when uploading same file via POST to Amazon S3

I am uploading the same file, "File.txt", multiple times. Sometimes it uploads to S3 successfully, and sometimes it raises:
SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method
I don't know what to test to discover the problem. Is it really a policy + signature problem, as the exception says? Then why can I sometimes upload?
Before upgrading to Rails 4 and updating some libs, the same configuration always worked.
But those updates shouldn't even affect a simple POST form...
The POST params request sent with a failed upload were:
key: attachments/ad6d5c8c-f9ae-48ab-a9ff-c1b199a91d1d/File.txt
acl: public-read
policy: XXX==
signature: YYY=
Content-Type: binary/octet-stream
AWSAccessKeyId: -- Hidden --
...And with a successful one:
key: attachments/368f5497-6e11-4f07-b379-80020e902013/File.txt
acl: public-read
policy: XXX==
signature: ZZZ=
Content-Type: binary/octet-stream
AWSAccessKeyId: -- Hidden --
It looks right, just like before. The only attribute that changed was the signature, because of the random GUID generation in the path...
Here are the methods used for generating the policy:
def fields
  {
    key: key,
    acl: acl,
    policy: policy,
    signature: signature,
    "Content-Type" => content_type || "binary/octet-stream",
    "AWSAccessKeyId" => S3_CONFIG["access_key_id"]
  }
end

def policy
  Base64.encode64(policy_data.to_json).gsub("\n", "")
end

def policy_data
  {
    expiration: expiration,
    conditions: [
      {bucket: S3_CONFIG["bucket"]},
      ["starts-with", "$key", store_dir],
      {acl: acl},
      ["starts-with", "$Content-Type", ''],
      ["content-length-range", min_file_size, max_file_size]
    ]
  }
end

def signature
  Base64.encode64(
    OpenSSL::HMAC.digest(
      OpenSSL::Digest::Digest.new('sha1'),
      S3_CONFIG["secret_access_key"], policy
    )
  ).gsub("\n", "")
end
Any help is appreciated.
Thanks
Let me try to sum things up. This is a draft of a solution only, since I don't have the environment here to test it.
• According to this, one should URL-encode the signature:
require 'open-uri'
signature = URI::encode(signature)
• Gsubbing "\n" in the policy/signature looks a bit suspicious. I would try either not touching it at all, or removing all possible LF/CR characters across operating system environments with .gsub(/\n|\r/, "")
I would start by trying the encoded, un-gsubbed policy/signature, and then continue with the gsubbed versions.
Hope it helps.
I had the same issue, but I figured it out by printing and decoding the 'policy' in the fields (form) and in the signature.
The answer is that the policy method is called twice (in fields and in signature), and each time the policy is generated with a different expiration value. That's why the failure is random and doesn't happen every time.
The solution is easy - keep policy in a variable for later access:
def policy
  @policy ||= Base64.encode64(policy_data.to_json).gsub("\n", "")
end
A fixed expiration value should work as well.

Sending An HTTP Request using Intersystems Cache

I have the following Business Process defined within a Production on an InterSystems Caché installation:
/// Makes a call to Merlin based on the message sent to it from the pre-processor
Class sgh.Process.MerlinProcessor Extends Ens.BusinessProcess [ ClassType = persistent, ProcedureBlock ]
{
Property WorkingDirectory As %String;
Property WebServer As %String;
Property CacheServer As %String;
Property Port As %String;
Property Location As %String;
Parameter SETTINGS = "WorkingDirectory,WebServer,Location,Port,CacheServer";

Method OnRequest(pRequest As sgh.Message.MerlinTransmissionRequest, Output pResponse As Ens.Response) As %Status
{
    Set tSC=$$$OK
    Do ##class(sgh.Utils.Debug).LogDebugMsg("Packaging an HTTP request for Saved form "_pRequest.DateTimeSaved)
    Set dateTimeSaved = pRequest.DateTimeSaved
    Set patientId = pRequest.PatientId
    Set latestDateTimeSaved = pRequest.LatestDateTimeSaved
    Set formName = pRequest.FormName
    Set formId = pRequest.FormId
    Set episodeNumber = pRequest.EpisodeNumber
    Set sentElectronically = pRequest.SentElectronically
    Set styleSheet = pRequest.PrintName
    Do ##class(sgh.Utils.Debug).LogDebugMsg("Creating HTTP Request Class")
    Set HTTPReq = ##class(%Net.HttpRequest).%New()
    Set HTTPReq.Server = ..WebServer
    Set HTTPReq.Port = ..Port
    Do HTTPReq.InsertParam("DateTimeSaved",dateTimeSaved)
    Do HTTPReq.InsertParam("HospitalNumber",patientId)
    Do HTTPReq.InsertParam("Episode",episodeNumber)
    Do HTTPReq.InsertParam("Stylesheet",styleSheet)
    Do HTTPReq.InsertParam("Server",..CacheServer)
    Set Status = HTTPReq.Post(..Location,0) Quit:$$$ISERR(tSC)
    Do ##class(sgh.Utils.Debug).LogDebugMsg("Sent the following request: "_Status)
    Quit tSC
}
}
The thing is, when I check the debug value (which is defined as a global), all I get is the number '1'. I therefore have no idea whether the request has succeeded, or what is wrong if it has not.
What do I need to do to find out:
A) What is the actual web call being made?
B) What is the response?
There is a really slick way to get the answer the two questions you've asked, regardless of where you're using the code. Check the documentation out on the %Net.HttpRequest object here: http://docs.intersystems.com/ens20102/csp/docbook/DocBook.UI.Page.cls?KEY=GNET_http and the class reference here: http://docs.intersystems.com/ens20102/csp/documatic/%25CSP.Documatic.cls?APP=1&LIBRARY=ENSLIB&CLASSNAME=%25Net.HttpRequest
The class reference for the Post method has a parameter called test that will do what you're looking for. Here's the excerpt:
method Post(location As %String = "", test As %Integer = 0, reset As %Boolean = 1) as %Status
Issue the Http 'post' request; this is used to send data to the web server such as the results of a form, or to upload a file. If this completes correctly, the response to this request will be in the HttpResponse. The location is the url to request, e.g. '/test.html'. This can contain parameters which are assumed to be already URL escaped, e.g. '/test.html?PARAM=%25VALUE' sets PARAM to %VALUE. If test is 1 then instead of connecting to a remote machine it will just output what it would have sent to the web server to the current device; if test is 2 then it will output the response to the current device after the Post. This can be used to check that it will send what you are expecting. This calls Reset automatically after reading the response, except in test=1 mode or if reset=0.
I recommend moving this code to a test routine to view the output properly in terminal. It would look something like this:
// To view the REQUEST you are sending
Set sc = request.Post("/someserver/servlet/webmethod",1)
// To view the RESPONSE you are receiving
Set sc = request.Post("/someserver/servlet/webmethod",2)
// You could also do something like this to parse your RESPONSE stream
Write request.HttpResponse.Data.Read()
I believe the answer you want to A) is in the Server and Location properties of your %Net.HttpRequest object (e.g., HTTPReq.Server and HTTPReq.Location).
For B), the response information should be in the %Net.HttpResponse object stored in the HttpResponse property (e.g. HTTPReq.HttpResponse) after your call is completed.
I hope this helps!
-Derek
(edited for formatting)
From that code sample it looks like you're using Ensemble, not straight-up Cache.
In that case you should be doing this HTTP call in a Business Operation that uses the HTTP Outbound Adapter, not in your Business Process.
See this link for more info on HTTP Adapters:
http://docs.intersystems.com/ens20102/csp/docbook/DocBook.UI.Page.cls?KEY=EHTP
You should also look into how to use the Ensemble Message Browser. That should help with your logging needs.

How to pass cookies from one page to another using curl in Ruby?

I am writing a video crawler in Ruby. It has to log in to a page (with cookies enabled) and then download the pages behind the login. For that I am using the Curl (Curb) library. I can successfully log in, but I can't download the inner pages with curl. How can I fix this, or how else can I download those pages?
My code is:
curl = Curl::Easy.new(1st url)
curl.follow_location = true
curl.enable_cookies = true
curl.cookiefile = "cookie.txt"
curl.cookiejar = "cookie.txt"
curl.http_post(1st url,field)
curl.perform
curl = Curl::Easy.perform(2nd url)
curl.follow_location = true
curl.enable_cookies = true
curl.cookiefile = "cookie.txt"
curl.cookiejar = "cookie.txt"
curl.http_get
code = curl.body_str
What I've seen in writing my own similar "post-then-get" script is that ruby/Curb (I'm using version 0.7.15 with ruby 1.8) seems to ignore the cookiejar/cookiefile fields of a Curl::Easy object. If I set either of those fields and the http_post completes successfully, no cookiejar or cookiefile file is created. Also, curl.cookies will still be nil after your curl.http_post, however, the cookies ARE set within the curl object. I promise :)
I think where you're going wrong is here:
curl = Curl::Easy.perform(2nd url)
The curb documentation states that this creates a new object. That new object doesn't have any of your existing cookies set. If you change your code to look like the following, I believe it should work. I've also removed the curl.perform for the first url since curl.http_post already implicitly does the "perform". You were basically http_post'ing twice before trying your http_get.
curl = Curl::Easy.new(1st url)
curl.follow_location = true
curl.enable_cookies = true
curl.http_post(1st url,field)
curl.url = 2nd url
curl.http_get
code = curl.body_str
If this still doesn't seem to be working for you, you can verify whether the cookie is getting set by adding
curl.verbose = true
before
curl.http_post
Your Curl::Easy object will dump all the headers that it gets in the response from the server to $stdout, and somewhere in there you should see a line stating that it added/set a cookie. I don't have any example output right now but I'll try to post a follow-up soon.
HTTPClient automatically enables cookies, as does Mechanize.
From the HTTPClient docs:
clnt = HTTPClient.new
clnt.get_content(url1) # receives Cookies.
clnt.get_content(url2) # sends Cookies if needed.
Posting a form is easy too:
body = { 'keyword' => 'ruby', 'lang' => 'en' }
res = clnt.post(uri, body)
Mechanize makes this sort of thing really simple (It will handle storing the cookies, among other things).
