I'm trying to pass an Authorization header through API Gateway into a Lambda function. I can see the key in the gateway logs, but even after I transform the input with the standard transformation script (see below), the Authorization header still doesn't make it to the Lambda function.
Any suggestions on what I'm missing?
API Log excerpt
Endpoint request headers:
{
X-Amz-Date=20220419T143450Z,
x-amzn-apigateway-api-id=?????????,
Accept=application/x-www-form-urlencoded,
User-Agent=AmazonAPIGateway_hhompg4,
Host=lambda.us-east-1.amazonaws.com,
X-Amz-Content-Sha256=??????????????????????????????????????????????????,
X-Amzn-Trace-Id=Root=1-????????-???????????????????,
x-amzn-lambda-integration-tag=abcd-4e32-1234-???????????????, Authorization=*********************************************************************************************************************************************************************************************************************************************************************************************************************************************70cc,
X-Amz-Source-Arn=arn:aws:execute-api:us-east-1:-----------------:asfd/test/POST/,
X-Amz-Security-Token=---------------------------------------// [TRUNCATED]
Method Execution / - POST - Integration Request Transformation script:
{
    "method": "$context.httpMethod",
    "body" : $input.json('$'),
    "headers": {
        #foreach($param in $input.params().header.keySet())
        "$param": "$util.escapeJavaScript($input.params().header.get($param))"
        #if($foreach.hasNext),#end
        #end
    }
}
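For reference, a non-proxy integration delivers exactly the object this template builds, so a minimal handler sketch (assuming the template actually runs) would read the header like this:

# Minimal sketch: reads the header out of the event shape built by the template above.
def handler(event, context):
    auth = event.get("headers", {}).get("Authorization")
    print("Authorization header present:", auth is not None)
    return {"ok": True}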
Event keys arriving at the Lambda function:
2022-04-19T14:29:34.457Z INFO Object.keys(event) [
'resource',
'path',
'httpMethod',
'headers',
'multiValueHeaders',
'queryStringParameters',
'multiValueQueryStringParameters',
'pathParameters',
'stageVariables',
'requestContext',
'body',
'isBase64Encoded'
]
Object.keys(event.headers)
[
'accept',
'accept-encoding',
'accept-language',
'cache-control',
'content-type',
'Host',
'origin',
'referer',
'sec-ch-ua',
'sec-ch-ua-mobile',
'sec-ch-ua-platform',
'sec-fetch-dest',
'sec-fetch-mode',
'sec-fetch-site',
'sec-fetch-user',
'upgrade-insecure-requests',
'User-Agent',
'X-Amzn-Trace-Id',
'X-Forwarded-For',
'X-Forwarded-Port',
'X-Forwarded-Proto'
]
I was able to see my request header (Authorization) in the request headers, but the same was not visible in the endpoint request headers. I found that you have to enable the 'Use HTTP Proxy integration' option while setting up the integration point.
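For anyone scripting this instead of clicking through the console, a minimal sketch of switching the integration to proxy mode with boto3 (the REST API ID, resource ID, and Lambda ARN below are placeholders):

import boto3

apigw = boto3.client("apigateway")

# Hypothetical IDs -- replace with your own REST API, resource, and Lambda ARN.
apigw.put_integration(
    restApiId="a1b2c3d4e5",
    resourceId="abc123",
    httpMethod="POST",
    type="AWS_PROXY",               # proxy integration: passes headers through untouched
    integrationHttpMethod="POST",   # Lambda invocations are always POST
    uri="arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/"
        "arn:aws:lambda:us-east-1:123456789012:function:my-function/invocations",
)
# A stage redeploy (apigw.create_deployment) is still needed for the change to go live.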
I'm new to Laravel and used Laravel Sanctum to build an app; the session driver is cookie.
The Laravel app is deployed behind Caddy, and I have enabled Caddy's logs. The log format is JSON, and I see that it contains each request's headers, so the cookie info is logged. I wonder if I can identify the user by the cookies. I tried decoding the cookie but failed. Is there any method to identify the user by the cookie?
This is the log format:
{
    "level": "info",
    "ts": 1648864255.073147,
    "logger": "http.log.access.log5",
    "msg": "handled request",
    "request": {
        ...
        "headers": {
            "Accept": [
                "application/json, text/plain, */*"
            ],
            "X-Xsrf-Token": [
                ".....yJpdiI6Ijh6bjVZMXUvOFlkR3V1U....."
            ],
            "Cookie": [
                "XSRF-TOKEN=...eyJpdiI6Ijh6bjVZMXUvOFlkR3V1UE....9"
            ]
        }
    },
    "resp_headers": {
        "Set-Cookie": [
            "XSRF-TOKEN=hbHVlIjoiOHN6L1BXa2N; expires=Sat, 02-Apr-2022 03:50:55 GMT; Max-Age=7200; ...",
            "card_session=2IzTC9BbTEydW5NUDEvd016aVhlOWp; expires=Sat, 02-Apr-2022 03:50:55 GMT; Max-Age=7200; ...",
            "hkx3q7J7TeLVf3hV9XSaDiwScSS7rUIPP7kcge7f=eyJpdiI6RzlOeEN5eUhxUVE4OUZpMkFmSmYSIsInRhZyI6IiJ9; expires=Sat, 02-Apr-2022 03:50:55 GMT; Max-Age=7200; ..."
        ],
        ...
    }
}
I really wonder why you need to identify the user from the cookie.
But since you are new to Laravel, I think you have it wrong about using Laravel Sanctum as the authenticator.
When you use Laravel Sanctum, it generates a token to use as a bearer token for your next API requests. So here is what you should do (see the sketch after this list):
Log in and get the user data and the token.
Store the token in the front end and use it as a bearer token for your next requests.
Log out and destroy the token.
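A minimal client-side sketch of that flow, assuming hypothetical /api/login, /api/user, and /api/logout endpoints and a login response that returns {"token": ...}:

import requests

BASE = "http://localhost:8100"  # hypothetical Laravel app URL

# 1. Log in and receive the user data plus a Sanctum token.
resp = requests.post(f"{BASE}/api/login", json={"email": "user@example.com", "password": "secret"})
token = resp.json()["token"]  # assumes the login endpoint returns the token in this field

# 2. Use the token as a bearer token on subsequent requests.
me = requests.get(f"{BASE}/api/user", headers={"Authorization": f"Bearer {token}"})
print(me.json())

# 3. Log out and destroy the token server-side.
requests.post(f"{BASE}/api/logout", headers={"Authorization": f"Bearer {token}"})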
The user data can be retrieved easily anywhere in the Laravel backend using
Auth::user()
or, if you only need the user id, then use
Auth::id()
Don't forget to restrict your endpoints with the authentication middleware if you don't want unauthenticated users to use them.
My working setup:
laravel app is on: http://localhost:8100
Vue app is on: http://localhost:5173
My env:
SANCTUM_STATEFUL_DOMAINS=localhost:5173
SESSION_DRIVER=cookie
SESSION_LIFETIME=120
SESSION_DOMAIN=.localhost
I'm trying to train a Form Recognizer model using the browser API console (https://eastus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api/operations/TrainCustomModel/console). I've uploaded training images to a container and created an SAS. The browser API console generates the following HTTP request:
POST https://eastus.api.cognitive.microsoft.com/formrecognizer/v1.0-preview/custom/train?source=https://pythonimages.blob.core.windows.net/?sv=2019-02-02&ss=bfqt&srt=sco&sp=rl&se=2020-01-22T00:23:33Z&st=2020-01-21T16:23:33Z&spr=https&sig=••••••••••••••••••••••••••••••••&prefix=images HTTP/1.1
Host: eastus.api.cognitive.microsoft.com
Content-Type: application/json
Ocp-Apim-Subscription-Key: ••••••••••••••••••••••••••••••••
{
    "source": "string",
    "sourceFilter": {
        "prefix": "string",
        "includeSubFolders": true
    }
}
However, the response I get back is:
Transfer-Encoding: chunked
x-envoy-upstream-service-time: 4
apim-request-id: 5ad37aa2-e251-4b61-98ae-023930b47d27
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
x-content-type-options: nosniff
Date: Tue, 21 Jan 2020 16:25:03 GMT
Content-Type: application/json; charset=utf-8
{
    "error": {
        "code": "1004",
        "message": "Dataset path must be relative to local input mount path '/input' if local data is referenced."
    }
}
I don't understand why it seems to be looking for the data locally. I've experimented with the SAS, e.g. including the container name (images) in the blob HTTP address rather than as a query parameter, but no success so far.
I've also tried the Python/REST path (described here: https://learn.microsoft.com/en-gb/azure/cognitive-services/form-recognizer/quickstarts/python-train-extract-v1), which results in a different error:
Response status code: 408
Response body: {'error': {'code': '1011', 'innerError': {'requestId': 'e7f9ef9f-97bc-4b6a-86f3-0b29c9591c87'}, 'message': 'The operation exceeded allowed time limit and was canceled. The common reasons are that the data source is too large or contains unsupported content. Please check that your request conforms to service limits and retry with redacted data source.'}}
For completeness, the code I use is as follows (key/signature starred out):
########### Python Form Recognizer Train #############
from requests import post as http_post

# Endpoint URL (no duplicated slash between host and path)
base_url = r"https://markusformsrecognizer.cognitiveservices.azure.com" + "/formrecognizer/v1.0-preview/custom"
source = r"https://pythonimages.blob.core.windows.net/images?sv=2019-02-02&ss=bfqt&srt=sco&sp=rl&se=2020-01-22T15:37:26Z&st=2020-01-22T07:37:26Z&spr=https&sig=*********************************"
headers = {
    # Request headers
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': '*********************************'
}
url = base_url + "/train"
body = {"source": source}
try:
    resp = http_post(url=url, json=body, headers=headers)
    print("Response status code: %d" % resp.status_code)
    print("Response body: %s" % resp.json())
except Exception as e:
    print(str(e))
For error code 1004: follow the steps below to get the source path containing the training documents, and pass it as the value of the source key.
{
    "source": "string",
    "sourceFilter": {
        "prefix": "string",
        "includeSubFolders": true
    }
}
Replace the source value with the Azure Blob storage container's shared access signature (SAS) URL. To retrieve the SAS URL, open Microsoft Azure Storage Explorer, right-click your container, and select Get shared access signature.
Make sure the Read and List permissions are checked, and click Create.
Then copy the value in the URL section. It should have the form:
https://<storage account>.blob.core.windows.net/<container name>?<SAS value>
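Concretely, a sketch of that request from Python (the SAS URL placeholders follow the form above; the subscription key is likewise a placeholder):

import requests

# The source is the container SAS URL described above.
body = {
    "source": "https://<storage account>.blob.core.windows.net/<container name>?<SAS value>",
    "sourceFilter": {
        "prefix": "",               # optional blob-name filter
        "includeSubFolders": True,
    },
}
resp = requests.post(
    "https://eastus.api.cognitive.microsoft.com/formrecognizer/v1.0-preview/custom/train",
    json=body,
    headers={"Ocp-Apim-Subscription-Key": "<subscription key>"},
)
print(resp.status_code, resp.json())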
Please use the new Form Recognizer v2.0 release; it is an async API and enables training on large data sets and analyzing large documents. https://aka.ms/form-recognizer/api
Quick start: https://learn.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-train-extract
To get started with Form Recognizer, please log in to the Azure Portal using this link to create a Form Recognizer resource (for v2.0 (preview) please use the West US 2 or West Europe regions).
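Because v2.0 is async, training is a POST that returns 201 Created with a Location header pointing at the new model, which you then poll; a rough sketch (endpoint, key, and SAS URL are placeholders):

import time
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<subscription key>"                                        # placeholder

# Kick off training; v2.0 answers 201 Created with a Location header.
resp = requests.post(
    f"{endpoint}/formrecognizer/v2.0/custom/models",
    json={"source": "<container SAS URL>"},
    headers={"Ocp-Apim-Subscription-Key": key},
)
model_url = resp.headers["Location"]

# Poll the model until training finishes ("creating" -> "ready" or "invalid").
while True:
    model = requests.get(model_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if model["modelInfo"]["status"] != "creating":
        break
    time.sleep(5)
print(model["modelInfo"]["status"])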
Try removing the string value from the prefix property:
{
    "source": "string",
    "sourceFilter": {
        "prefix": "",
        "includeSubFolders": true
    }
}
The Python quick start code for version 2.0 seems to be working; at least I don't get any errors anymore. I'm now feeling slightly silly that I didn't try this earlier. The API (web-browser) console linked from the quick start page of the Form Recognizer seems to automatically assume I want to use version 1.0, and there's no way to change that (or perhaps I've just overlooked something). Hence I assumed I'd been allocated a v1.0 trial, and therefore that's what I used when I tried the Python quick start the first time around.
Instead of using just the SAS URI in the "source" request parameter of the API POST call, use the complete container string followed by the SAS token.
For example:
https://<storage account>.blob.core.windows.net/<container name>?<SAS token>
I have a Flask-based service hosted on Heroku. Its endpoint is configured as a fulfillment webhook in Dialogflow. Now I cannot figure out how to capture the request payload which Dialogflow sends every time I request something.
I tried capturing and logging it in Heroku itself, but that does not seem to be working.
The service code is as follows:
import logging
from flask import Flask, request

app = Flask(__name__)

@app.route('/date/currentdate/<date>', methods=['POST'])
def postJsonHandler(date):  # <date> from the URL rule is passed to the view
    print(request.is_json)
    content = request.get_json()
    logging.warning(content)
    return 'JSON posted'
The JSON which I am getting is:
WARNING:root:{'responseId': 'c5115583-e9c5-497a-8a50-1ea07ab02dba-baaf0c1f', 'queryResult': {'queryText': 'send me the asap for 4568999', 'parameters': {'Dashboard': 'ASAP', 'number': 4568999.0}, 'allRequiredParamsPresent': True, 'fulfillmentMessages': [{'text': {'text': ['Hi I can definitely help you out with that.']}, 'platform': 'SKYPE'}, {'text': {'text': ['']}}], 'intent': {'name': 'replaced this', 'displayName': 'ASAP Dashboard'}, 'intentDetectionConfidence': 0.7012109, 'languageCode': 'en'}, 'originalDetectIntentRequest': {'payload': {}}, 'session': 'replaced this'}
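Once the payload arrives as above, the useful fields can be pulled straight out of the parsed dict, for example:

# Based on the payload logged above; keys follow the Dialogflow v2 webhook format.
params = content['queryResult']['parameters']      # {'Dashboard': 'ASAP', 'number': 4568999.0}
query_text = content['queryResult']['queryText']   # 'send me the asap for 4568999'
intent_name = content['queryResult']['intent']['displayName']  # 'ASAP Dashboard'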
There is a Diagnostic info section in the agent console which has all the info related to the request and response. It is sometimes not visible if there is a pop-up at the top of the page; in my case it was the 'v2 getting outdated' banner, and on closing it the diagnostic info became visible.
So far I've managed to create two webhooks by using their official gem (https://github.com/bigcommerce/bigcommerce-api-ruby) with the following events:
store/order/statusUpdated
store/app/uninstalled
The destination URL is a localhost tunnel managed by ngrok (the https version).
status_update_hook = Bigcommerce::Webhook.create(connection: connection, headers: { is_active: true }, scope: 'store/order/statusUpdated', destination: 'https://myapp.ngrok.io/bigcommerce/notifications')
uninstall_hook = Bigcommerce::Webhook.create(connection: connection, headers: { is_active: true }, scope: 'store/app/uninstalled', destination: 'https://myapp.ngrok.io/bigcommerce/notifications')
The webhooks seem to be active and correctly created, as I can retrieve and list them:
Bigcommerce::Webhook.all(connection:connection)
I manually created an order in my store dashboard, but no matter which state I change it to, or how many times, no notification is fired. Am I missing something?
The exception that I'm seeing in the logs is:
ExceptionMessage: true is not a valid header value
The "is-active" flag should be sent as part of the request body--your headers, if you choose to include them, would be an arbitrary key value pair that you can check at runtime to verify the hook's origin.
Here's an example request body:
{
"scope": "store/order/*",
"headers": {
"X-Custom-Auth-Header": "{secret_auth_password}"
},
"destination": "https://app.example.com/orders",
"is_active": true
}
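If the gem keeps fighting you, the same request can be made directly against BigCommerce's v2 hooks endpoint; a sketch in Python (the store hash and OAuth credentials are placeholders):

import requests

store_hash = "abc123"  # placeholder store hash
headers = {
    "X-Auth-Client": "<client id>",    # placeholder OAuth client id
    "X-Auth-Token": "<access token>",  # placeholder OAuth access token
    "Content-Type": "application/json",
    "Accept": "application/json",
}
body = {
    "scope": "store/order/statusUpdated",
    "destination": "https://myapp.ngrok.io/bigcommerce/notifications",
    "is_active": True,  # in the body, not in the HTTP headers
}
resp = requests.post(
    f"https://api.bigcommerce.com/stores/{store_hash}/v2/hooks",
    json=body,
    headers=headers,
)
print(resp.status_code, resp.json())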
Hope this helps!
Hello, I need some help with sending a PUT request to my Elasticsearch domain on AWS to create a snapshot repository in an S3 bucket, using Postman.
I have created an S3 bucket called cb-search-es-backup.
I've created a role and a policy for S3 (see this post of mine for the steps I've taken).
REQUEST URL: https://myelasticsearchendpoint.eu-west-1.es.amazonaws.com/
REQUEST METHOD: PUT
BODY: raw / JSON
{
    "type": "s3",
    "settings": {
        "bucket": "cb-search-es-backup",  // my bucket name
        "region": "eu-west-1",  // region
        "role_arn": "arn:aws:iam::12345676890:role/Role_ES_TO_S3"  // my role arn
    }
}
I've also tried the authorization type 'AWS Signature', with the access and secret key filled in.
It looks like you are not passing AWS credentials with this request.
There is a detailed guide on how to make a Postman request with AWS authentication here: Use Postman to Call an API.
To do the same from Python, please check out the Sample python client section of this documentation page; note that an AWS4Auth object is created and passed as the auth parameter to requests.put():
import boto3
import requests
from requests_aws4auth import AWS4Auth

host = 'https://myelasticsearchendpoint.eu-west-1.es.amazonaws.com/'  # the domain endpoint from the question; keep the trailing slash
region = 'eu-west-1'
service = 'es'

credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)

# Register repository
path = '_snapshot/my-snapshot-repo'  # the Elasticsearch API endpoint
url = host + path
payload = {
    ...
}
headers = {"Content-Type": "application/json"}
r = requests.put(url, auth=awsauth, json=payload, headers=headers)
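Once the repository is registered, actually taking a snapshot is another signed PUT against the repository path; a minimal follow-on sketch reusing awsauth from above (snapshot-1 is an arbitrary name):

# Take a snapshot into the repository registered above.
snapshot_url = host + '_snapshot/my-snapshot-repo/snapshot-1'
r = requests.put(snapshot_url, auth=awsauth)
print(r.status_code, r.text)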