Ueberauth plug fails to assign auth and failure to conn

I'm utilizing Ueberauth in a library I'm building to simplify the addition of authentication in Phoenix.
I've set up an example repo; you can see the code in the v2 branch.
I'm attempting to implement the Ueberauth Identity strategy right now. It appears that although I've added the Ueberauth plug to the controller, ueberauth_failure and ueberauth_auth are not being set in the conn's assigns. Instead, conn.assigns is %{}.
I’m expecting that with this example curl request:
curl -i -X POST localhost:4000/identity/callback -d '{"user": {"email": "test@example.com", "password": "password"}}'
at least one of them (ueberauth_failure or ueberauth_auth) would be set.
Ueberauth is configured here: https://github.com/britton-jb/test_sentinel_example/blob/master/config/config.exs#L58
The controller implementing the strategy is https://github.com/britton-jb/sentinel/blob/v2/lib/sentinel/controllers/json/session_controller.ex
The controller and route are exposed in https://github.com/britton-jb/test_sentinel_example/blob/master/web/router.ex#L26
by this macro: https://github.com/britton-jb/sentinel/blob/v2/lib/sentinel.ex#L50

@hassox suggested the correct solution: Ueberauth Identity's plug must be routed to /auth/identity and /auth/identity/callback. As soon as I changed the routing to reflect this, everything worked.
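For anyone else who hits this, here is a minimal sketch of the working router scope (module and controller names are illustrative, not Sentinel's actual ones):

# Ueberauth's Identity strategy matches requests against its configured
# base path, which defaults to "/auth", so the routes must live under it.
scope "/auth", MyApp do
  pipe_through :api

  get "/identity", SessionController, :request
  post "/identity/callback", SessionController, :callback
end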

Related

Uploading PDF to AWS Lambda via API Gateway mangles the bits...why?

I have deployed an AWS Lambda function, written in Python, and an AWS API Gateway structure to cause POST requests to an API endpoint to be redirected to my function. I want to upload a PDF document to my function and have it store the document in an S3 bucket. The problem I have is that the payload of any POST request to my API is being UTF-8 encoded. I don't want that, but I can't figure out the magic mojo to disable encoding of the request payload.
I am testing using curl, with the following command line:
curl -XPOST https://xxxxxxxxxx.execute-api.us-west-1.amazonaws.com/test -H 'content-type: application/pdf' --data-binary @document.pdf
UPDATE: I just found the following article describing how API Gateway and Lambda support uploading binary data:
https://aws.amazon.com/blogs/compute/handling-binary-data-using-amazon-api-gateway-http-apis/
This article suggests that all of the complexities I discussed in the initial formulation of my question (still provided below) should not be necessary. All I should need to do to upload binary content to my Lambda function is ensure that my request includes an appropriate Content-Type header. I was already doing that, but I massaged my curl command a bit (modified above) to define my request exactly the way it is done in the article. I still get UTF-8 encoded data and NOT base64-encoded data. I even tried uploading a JPEG file rather than a PDF so that I was doing exactly what was done in the article. Still no love. The article demonstrates exactly what I'm doing, but I don't get the result it suggests I should. Ggggrrrr.
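For reference, this is the shape of handler I'd expect to work once API Gateway actually delivers the payload base64-encoded; it's a sketch under that assumption, and the bucket/key names are placeholders:

import base64
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    body = event["body"]
    # When API Gateway treats the payload as binary, it base64-encodes it
    # and sets isBase64Encoded; otherwise the body arrives as text.
    if event.get("isBase64Encoded"):
        body = base64.b64decode(body)
    else:
        body = body.encode("utf-8")
    s3.put_object(
        Bucket="my-upload-bucket",  # placeholder bucket name
        Key="document.pdf",         # placeholder object key
        Body=body,
        ContentType="application/pdf",
    )
    return {"statusCode": 200, "body": "stored"}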
ORIGINAL POST:
I am using Terraform to define my deployment. I want to cause the PDF to not be encoded/mangled at all. This is my first time using API Gateway, and I'm obviously missing some bit of config. The one thing I'm doing specifically right now to say that I want incoming payloads to be treated as binary is via the binary_media_types argument to my API definition in Terraform:
resource "aws_api_gateway_rest_api" "proxy" {
  ...
  binary_media_types = [
    "application/pdf",
    "application/octet-stream",
    "*/*",
  ]
}
This sets the Binary Media Types configuration associated with the API I've defined. I've confirmed via the AWS Console that this setting is having the desired effect: I can see these types in the console. I should need just the first item in the list, but I've added the others while I try to figure out the problem here. By adding the wildcard item, I believe it shouldn't matter what the incoming Content-Type is; all payloads should be treated as binary.
The other bit of config that I know about that might be important is the integration's contentHandling property. The key bit of AWS docs that seems to explain all this is the table of contentHandling values and passthrough behaviors (not reproduced here).
I think the case that applies to me is the one with contentHandling unspecified, per what I say above. This says to me that I shouldn't need to do anything else. Still, I've tried setting the content_handling argument on the integration record of my Terraform config, like this:
resource "aws_api_gateway_integration" "proxy" {
  ...
  passthrough_behavior = "WHEN_NO_MATCH"
  content_handling     = "CONVERT_TO_BINARY"
}
I first tried only specifying the content_handling value. I've also tried setting that value to "CONVERT_TO_TEXT", hoping to then get base64-encoded data. Neither of these has any effect. I've tried adding the passthrough_behavior value as shown. I've also tried replacing "WHEN_NO_MATCH" with "WHEN_NO_TEMPLATES". Nothing I do changes the behavior. I haven't been able to figure out where these settings would show up in the AWS console. If I knew they were necessary, I'd explore this further. But I don't think I need to set these.
What am I missing? How can I POST a PDF document to my AWS Lambda function through API Gateway and have the payload of the request not be converted in any way? TIA!
NOTE: I am aware of this Q/A: PDF Uploaded via AWS API Gateway getting corrupted. The answer there doesn't apply to me, as I need to avoid having to form-encode the upload. The client code that will eventually be doing the upload is set in stone and sends a POST request with a payload that is just the bytes of the PDF.

AWS SAM APIGatewayProxyRequestEvent HttpMethod not set when starting API locally

I created an application with an AWS::Serverless::Function, which has an HttpApi event attached to it. I thought it would be a good idea to create one Lambda per resource, so that e.g. POST, GET, and PUT on /customer are all handled by a single Lambda, which decides which action to take using:
switch (input.getHttpMethod()) {
    case "GET": ...
    case "POST": ...
}
So now coming to my problem: when starting the application using sam local start-api, my Lambda gets called correctly, but neither input.getHttpMethod() nor input.getRequestContext().getHttpMethod() is set.
Given that SAM supports multiple HttpApi events, failing to provide the HTTP method when running the application locally breaks local development almost completely. Am I doing something wrong, or is this really not working? I'm using Java, by the way; I can't tell whether this problem exists with other languages as well.
Just in Case:
Is my "one lambda per resource" approach wrong, should every single action have its own lambda?
I found the problem. HttpApi uses "PayloadFormatVersion: 2.0" by default. My input was of type com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent, but that class represents format 1.0; I had to use com.amazonaws.services.lambda.runtime.events.APIGatewayV2HTTPEvent to get it to work.
This fails silently because instances are simply constructed from the JSON input, and several fields are the same in both classes.
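For anyone hitting the same thing, here is roughly what the working handler looks like with the 2.0 event type (the handler class name is illustrative):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayV2HTTPEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayV2HTTPResponse;

public class CustomerHandler implements RequestHandler<APIGatewayV2HTTPEvent, APIGatewayV2HTTPResponse> {
    @Override
    public APIGatewayV2HTTPResponse handleRequest(APIGatewayV2HTTPEvent input, Context context) {
        // In payload format 2.0 the method lives under requestContext.http.method
        String method = input.getRequestContext().getHttp().getMethod();
        switch (method) {
            case "GET":
                // ... handle GET /customer
                break;
            case "POST":
                // ... handle POST /customer
                break;
        }
        return APIGatewayV2HTTPResponse.builder().withStatusCode(200).build();
    }
}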

Post a NiFi template via REST?

I have multiple NiFi servers to which I'd like to POST templates via the REST interface from a script.
The "/controller/templates" endpoint appears to be the proper REST endpoint to support POSTing an arbitrary template to my NiFi installation.
The "snippetId" field is what is confusing me, how do I determine "The id of the snippet whose contents will comprise the template"? Does anyone have an example of how I can upload a template "test.xml" to my server without having to use the UI?
The API has moved in 1.0 to:
POST /process-groups/{id}/templates/upload
Example, using Python's requests library:
import requests

res = requests.post(
    "{hostname}/nifi-api/process-groups/{destination_process_group}/templates/upload".format(**args),
    files={"template": open(file_path, 'rb')},
)
The provided documentation is somewhat confusing, and the solution I worked out was derived from the NiFi API deploy Groovy script at https://github.com/aperepel/nifi-api-deploy
Ultimately, to POST a template directly, you can use the following in Python requests
requests.post("%s/nifi-api/controller/templates"%(url,), files={"template":open(filename, 'rb')})
Where filename is the filename of your template and url is the path to your nifi instance. I haven't figured it out in curl directly but this should hopefully get folks with a similar question started!
Edit:
Note that you also can't upload a template with the same name as an existing template. Make sure to delete your existing template before attempting to re-upload. Using the untangle library to parse the XML of the template, the following script works just fine:
import untangle, sys, requests

def deploy_template(filename, url):
    p = untangle.parse(filename)
    new_template_name = p.template.name.cdata
    r = requests.get("%s/nifi-api/controller/templates" % (url,), headers={"Accept": "application/json"})
    for each in r.json()["templates"]:
        if each["name"] == new_template_name:
            requests.delete(each["uri"])
    requests.post("%s/nifi-api/controller/templates" % (url,), files={"template": open(filename, 'rb')})

if __name__ == "__main__":
    deploy_template(sys.argv[1], sys.argv[2])
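It can then be invoked with the template file and the base URL of the NiFi instance, e.g. (script name and host are illustrative): python deploy_template.py test.xml http://localhost:8080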
If you want to POST a template to NiFi via cURL you can use the following command:
curl -iv -F template=@my_nifi_template.xml -X POST http://nifi-host:nifi-port/nifi-api/controller/templates
This will add the template to the NiFi instance with the same name that the template was given when it was generated.
The -iv is optional; it's just there for debugging purposes.
You can use the NiFi API to upload a template. To do that, follow these two steps:
1. Get a token from the NiFi API:
token=$(curl -k -X POST --negotiate -u : https://nifi_hostname:port/nifi-api/access/kerberos)
2. Upload the template file using the token:
curl -k -F template=@template_file.xml -X POST https://nifi_hostname:port/nifi-api/process-groups/Process_group_id/templates/upload -H "Authorization: Bearer $token"
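The same two steps in Python would look roughly like this (a sketch assuming the requests-kerberos package; the host and process group id are placeholders, as above):

import requests
from requests_kerberos import HTTPKerberosAuth

# Step 1: get a token via SPNEGO/Kerberos, like curl --negotiate
token = requests.post("https://nifi_hostname:port/nifi-api/access/kerberos",
                      auth=HTTPKerberosAuth(), verify=False).text

# Step 2: upload the template with the bearer token
requests.post("https://nifi_hostname:port/nifi-api/process-groups/Process_group_id/templates/upload",
              files={"template": open("template_file.xml", 'rb')},
              headers={"Authorization": "Bearer %s" % token},
              verify=False)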
The documentation can be confusing because that endpoint is overloaded and the documentation tool only generates doc for one of them (see NIFI-1113). There is an email thread that addresses the import of a template using curl, so between the above answer and the email thread, hopefully you can find the approach that works for you.
I've implemented a full Python client for doing this in NiPyApi
Key functions for templates are:
[
    "list_all_templates", "get_template_by_name", "deploy_template",
    "upload_template", "create_pg_snippet", "create_template",
    "delete_template", "export_template", "get_template"
]
The client currently supports NiFi 1.1.2 through 1.7.1, and NiFi Registry (which is a lot better than templates for flow deployment).
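A minimal usage sketch (the host URL is a placeholder, and exact signatures should be checked against the NiPyApi docs):

import nipyapi

# Point the client at your NiFi instance
nipyapi.config.nifi_config.host = "http://localhost:8080/nifi-api"

# Upload a template into the root process group, then instantiate it
root_id = nipyapi.canvas.get_root_pg_id()
template = nipyapi.templates.upload_template(root_id, "test.xml")
nipyapi.templates.deploy_template(root_id, template.id)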

Trying to update a URL using curl on Linux

I am trying to update a ticket, and I am trying to do that from Linux. I am not sure whether it can be done or not. While searching, I found a blog post whose author was also trying to update something.
They used the command below to update the URL:
curl -i -u "script-user:password" -X PUT -d "short_description=Update+me" https:/0005000972
I suspect they are trying to update the URL with "Update me".
Is that right? Or can someone explain what the blogger was trying to do?
This is a RESTful request, that is, a request using standard HTTP methods (GET, POST, PUT, DELETE) to query a server.
-X PUT specifies the method used
-d "...." specifies the data sent to the server
https://... (ill-formed in your example) is of course the URL of the target server
Usually, the PUT method is used to replace an existing attribute/value on the server. As the concrete parameters and/or methods available are service-dependent, I can only guess here that the intent is to update some attribute named short_description to store the value "Update me" (URL-encoded, or more formally x-www-form-urlencoded).
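In Python, the equivalent request would look roughly like this (the URL is a made-up placeholder, since the one in the question is malformed):

import requests

# data= sends the body x-www-form-urlencoded, just like curl's -d
resp = requests.put(
    "https://tickets.example.com/api/tickets/0005000972",
    auth=("script-user", "password"),
    data={"short_description": "Update me"},
)
print(resp.status_code, resp.text)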
Maybe you should first read a little more about those topics, and then, if necessary, post another question describing in more detail both the target server and the goal you're trying to achieve on it.

How can I force a meteor app to make all HTTP calls through a proxy?

I'm trying to emulate curl through a proxy server. The Meteor docs don't mention any proxy settings for the HTTP.* methods.
Is there a Meteor-specific solution? Right now I'm using ProxyChains.
Ideally I'd use a SOCKS proxy and only HTTP.* calls would go through it, but I'm open to all calls from the application going through any type of proxy.
Meteor 1.1 update
You can pass options directly to the npm request module via the npmRequestOptions parameter to HTTP.* calls. The functionality was enabled by this commit, made after I filed an issue in 2013 (see below).
You no longer need to use the http-more package.
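For example, on the server (the proxy URL here is illustrative):

// npmRequestOptions is passed straight through to the request module,
// which understands a proxy option.
HTTP.get("http://example.com/api", {
  npmRequestOptions: {
    proxy: "http://localhost:3128"
  }
}, function (error, result) {
  console.log(error, result && result.statusCode);
});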
Old answer, pre-Meteor 1.1
One method would be to pass a proxy parameter to HTTP.* calls; they are backed by the request module, which supports proxies as an option.
proxy isn't a recognized option in the HTTP package, and I've filed a request to simply pass through unrecognized options. It was rejected by one of the Meteor core developers.
I'd rather people vote on that issue, asking for unknown options to be passed through instead of being ignored. In the meantime, I've created a package that does pass through options: http-more.
Here's a Meteor proxy package: https://npmjs.org/package/seafish-http-proxy-meteor
It's not available through Atmosphere, but it is an npm package designed for Meteor, which means it will be very easy to integrate.
