UTF-8 encoding with the Google Apps Email Settings API

I've been using the Google Apps Email Settings API for a while, but I ran into a problem when I tried to insert aliases, signatures, or any information containing "ñ" or "Ñ". It adds garbage instead of those characters, and it doesn't seem to respect either the charset specified (UTF-8) in the HTTP header or the XML character encoding.
I have tried both my own Python code and the OAuth Playground[1], but it has been impossible to add those characters properly.
Any idea/suggestion?
Thanks in advance.
EDIT: It seems that the problem is not in the request but in the response. I have handled the encoding successfully in my code, but it should also be fixed in the OAuth Playground.
[1] https://developers.google.com/oauthplayground/

I have successfully called Google API client methods using UTF-8-encoded strings, so it is definitely an issue with your Python setup.
I would work around this issue by sending Unicode strings instead of UTF-8-encoded ones:
u'literal string' # This is unicode
'encoded utf-8 string'.decode('utf-8') # This is unicode
EDIT: Re-reading your question, it seems that you are making raw HTTP calls with hand-made XML documents, and I can't understand why. If that's the way you want to go, take a look at the Email Settings API client code to learn how to build the XML documents.
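If you do stick with hand-made XML, here is a minimal sketch of what I mean, assuming you let an XML library do the serialization so the declared encoding and the actual bytes agree. The element and attribute names below are illustrative placeholders, not the real Email Settings schema:
# Hedged sketch: build the request body with ElementTree so the XML
# declaration, the HTTP charset and the bytes on the wire all say UTF-8.
# Element and attribute names are placeholders, not the real schema.
import xml.etree.ElementTree as ET

entry = ET.Element('entry')
ET.SubElement(entry, 'property', {'name': 'signature',
                                  'value': u'Se\u00f1or P\u00e9rez'})

# tostring() with encoding='utf-8' emits an XML declaration and returns
# UTF-8 bytes, ready to send together with a
# Content-Type: application/atom+xml; charset=utf-8 header.
body = ET.tostring(entry, encoding='utf-8')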

Related

Uploading PDF to AWS Lambda via API Gateway mangles the bits...why?

I have deployed an AWS Lambda function, written in Python, and an AWS API Gateway configuration that routes POST requests on an API endpoint to my function. I want to upload a PDF document to my function and have it store the document in an S3 bucket. The problem I have is that the payload of any POST request to my API is being UTF-8 encoded. I don't want that, but I can't figure out the magic mojo to disable encoding of the request payload.
I am testing using curl, with the following command line:
curl -XPOST https://xxxxxxxxxx.execute-api.us-west-1.amazonaws.com/test -H 'content-type: application/pdf' --data-binary @document.pdf
UPDATE: I just found the following article describing how API Gateway and Lambda support uploading binary data:
https://aws.amazon.com/blogs/compute/handling-binary-data-using-amazon-api-gateway-http-apis/
This article suggests that all of the complexities I discussed in the initial formulation of my question (still provided below) should not be necessary. All I should need to do to upload binary content to my Lambda function is ensure that my request includes an appropriate Content-Type header. I was already doing that, but I massaged my curl command a bit (modified above) to define my request exactly the way it is done in this article. I still get UTF-8 encoded data and NOT base64-encoded data. I tried uploading a JPEG file rather than a PDF so I was doing exactly what was done in the article. Still no love. I don't get it. This article demonstrates exactly what I'm doing, but I don't get the result it suggests I should. Ggggrrrr.
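For comparison, here is roughly the same request issued from Python with the requests library; the endpoint URL and file name are just the placeholders from the curl command above, nothing verified:
# Hedged sketch: the same POST as the curl command, sent from Python.
import requests

with open('document.pdf', 'rb') as f:
    pdf_bytes = f.read()

resp = requests.post(
    'https://xxxxxxxxxx.execute-api.us-west-1.amazonaws.com/test',
    data=pdf_bytes,                               # raw bytes, no form encoding
    headers={'Content-Type': 'application/pdf'},  # should match a binary media type
)
print(resp.status_code, resp.text)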
ORIGINAL POST:
I am using Terraform to define my deployment. I want the PDF not to be encoded or mangled at all. This is my first time using API Gateway, and I'm obviously missing some bit of config. The one thing I'm doing specifically right now to say that I want incoming payloads to be treated as binary is the binary_media_types argument on my API definition in Terraform:
resource "aws_api_gateway_rest_api" "proxy" {
  ...
  binary_media_types = [
    "application/pdf",
    "application/octet-stream",
    "*/*"
  ]
}
This sets the Binary Media Types configuration associated with the API I've defined. I've confirmed via the AWS Console that this setting is having the desired effect...I can see these types in the console. I should need just the first item in the list, but I've added the others while I try to figure out the problem here. By adding the wildcard item, I believe it shouldn't matter what the incoming Content-Type is...all payloads should be treated as binary.
The other bit of config that I know about that might be important is the "integration contentHandling property". Here is the key bit of AWS docs that seems to explain all this:
I think the case that applies to me here is the one I've highlighted, per what I say above. This says to me that I shouldn't need to do anything else, given the "unspecified" value in the table for "contentHandling". I've tried setting the "contentHandling" argument on the integration record of my Terraform config, like this:
resource "aws_api_gateway_integration" "proxy" {
  ...
  passthrough_behavior = "WHEN_NO_MATCH"
  content_handling     = "CONVERT_TO_BINARY"
}
I first tried only specifying the content_handling value. I've also tried setting that value to "CONVERT_TO_TEXT", hoping to then get base64-encoded data. Neither of these has any effect. I've tried adding the passthrough_behavior value as shown. I've also tried replacing "WHEN_NO_MATCH" with "WHEN_NO_TEMPLATES". Nothing I do changes the behavior. I haven't been able to figure out where these settings would show up in the AWS console. If I knew they were necessary, I'd explore this further. But I don't think I need to set these.
What am I missing? How can I POST a PDF document to my AWS Lambda function through API Gateway and have the payload of the request not be converted in any way? TIA!
NOTE: I am aware of this Q/A: PDF Uploaded via AWS API Gateway getting corrupted. The answer there doesn't apply to me, as I need to avoid having to form-encode the upload. The client code that will eventually be doing the upload is set in stone and sends a POST request with a payload that is just the bytes of the PDF.
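For reference, when API Gateway does treat the payload as binary it hands Lambda a base64-encoded body with isBase64Encoded set to true, so the receiving side would look roughly like this sketch (the bucket name and object key are made up):
# Hedged sketch of the receiving Lambda: decode the base64 body that API
# Gateway produces for binary media types and store it in S3.
import base64
import boto3

s3 = boto3.client('s3')

def handler(event, context):
    body = event.get('body') or ''
    if event.get('isBase64Encoded'):
        pdf_bytes = base64.b64decode(body)
    else:
        # If this branch runs, the payload was passed through as text,
        # i.e. the mangling described in the question.
        pdf_bytes = body.encode('utf-8')

    s3.put_object(Bucket='my-upload-bucket', Key='document.pdf', Body=pdf_bytes)
    return {'statusCode': 200, 'body': 'stored'}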

French character é is printing as � in JHipster Thymeleaf template

I am using the JHipster application's email functionality to send mail on user creation. When sending mail in French, characters like é are printed incorrectly.
These characters come from the standard messages_fr.properties file.
Obviously it's an encoding issue, but both the email template HTML and the Java code set the encoding to UTF-8, which should display this correctly.
While debugging I found that in the MailService.java class, the content loaded by SpringTemplateEngine's process method already contains the wrong characters before the encoding is set to UTF-8.
My code:
String content = templateEngine.process("activationEmail", context);
It looks like I know the root cause, but since this is Spring's internal API, I don't know how to fix the issue.
As Gael pointed out, the exact root cause of the issue is that STS changes the encoding of .properties files to ISO-8859-1 by default. Changing the files back to UTF-8 solves the issue. Thanks, Gael, for pointing me in this direction.

Response rendered as JSON in IE for browsable APIs

In IE, when I try to browse the REST APIs, I get an application/json response instead of the API (text/html) response (Firefox returns the HTML response). I am using Django REST framework 2.2.5 for this purpose.
I read through the documents and understood that, in order to overcome the problem of broken Accept headers in IE, we need to use TemplateHTMLRenderer explicitly in the view, so I added the following to my view's class definition, but I still get a JSON response. Am I not doing it correctly, or am I missing something else?
class CustomReports(generics.GenericAPIView):
    renderer_classes = (renderers.TemplateHTMLRenderer)
Can you please help fix the problem so that I get an HTML response in IE as well?
Which version of IE are you using? I believe newer versions of IE should send correct Accept headers.
I probably wouldn't bother trying to fix things up to work around IE's broken behavior, but would instead just make sure that you're including format suffixes in your URLs. Then you can simply use the .api suffix to see the browsable API, or the .json suffix to see the plain JSON.
E.g. instead of http://127.0.0.1:8000/api-root/, use http://127.0.0.1:8000/api-root/.api.
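Wiring up the suffixes is a one-liner in the URLconf; a minimal sketch, assuming a route and import path that are examples only:
# Hedged sketch: enable .api / .json suffixes on existing URL patterns.
# The import path and route are examples, not the asker's actual URLconf.
from django.conf.urls import url
from rest_framework.urlpatterns import format_suffix_patterns

from myapp.views import CustomReports

urlpatterns = [
    url(r'^custom-reports/$', CustomReports.as_view(), name='custom-reports'),
]
urlpatterns = format_suffix_patterns(urlpatterns)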

Cookie encoding

I have a problem with cookie encoding. The cookies contain some French characters. Firefox sends them back correctly, but it seems Chrome has a problem sending them.
Any idea why this happens?
It seems Chrome doesn't support Unicode in its cookies :-/
The correct solution is to:
1. convert the French string to UTF-8, and then
2. percent-encode it (also known as URL encoding); see http://en.wikipedia.org/wiki/Percent-encoding
This way you can submit any Unicode string via a cookie.
In C#, there is a function that does exactly these two things: System.Uri.EscapeDataString().
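In Python the equivalent two steps look roughly like this (the cookie value is just an example):
# Hedged sketch: UTF-8 encode, then percent-encode, before putting the value
# into a cookie; unquote() reverses it when reading the cookie back.
from urllib.parse import quote, unquote

raw = u'pr\u00e9f\u00e9rence'               # "préférence"
cookie_value = quote(raw.encode('utf-8'))   # 'pr%C3%A9f%C3%A9rence'

assert unquote(cookie_value) == raw         # round-trips back to the original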

How to test a web service for Unicode handling

Are there test tools available to check whether a web service can handle Unicode (UTF-8 encoded) POSTs? How do I generate UTF-8 encoded data?
The sad-but-true answer is that there is no way to know what encoding a program expects if it doesn't either document it or provide encoding metadata in whatever protocol you're using.
As for generating UTF-8, well, that depends on what programming language you're using.
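If the language happens to be Python, a small sketch of generating a deliberately non-ASCII, UTF-8 encoded POST might look like this (the endpoint URL is a placeholder):
# Hedged sketch: send a non-ASCII payload as UTF-8 and declare the charset,
# so the service under test receives well-formed UTF-8 with explicit metadata.
import requests

payload = u'Z\u00fcrich \u2013 na\u00efve \u2013 \u65e5\u672c\u8a9e'
resp = requests.post(
    'https://example.com/endpoint',               # placeholder URL
    data=payload.encode('utf-8'),
    headers={'Content-Type': 'text/plain; charset=utf-8'},
)
print(resp.status_code)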
