CORS error with AWS api gateway and java lambda function - aws-lambda

Today I created an API Gateway API on AWS and a Java Lambda function, and then integrated the API Gateway with the Lambda function.
When I hit the API using Postman it returns the result, which is basically a list of customers. Up to this point everything looks fine. Following is the Lambda handler:
@Override
public TestResponse handleRequest(Request input, Context context) {
    // Look up the service from the singleton manager and delegate the request to it
    TestService testService = SingletonServiceManager.getInstance().getTestService();
    TestListResponse response = (TestListResponse) testService.executeRequest(input);
    return response;
}
After executing, it returns the following output:
{
  "status": 200,
  "products": [
    {
      "name": "test1",
      "code": "test1",
      "status": true
    },
    {
      "name": "test2",
      "code": "test2",
      "status": true
    }
  ]
}
But when I started integrating this API call with Angular from my local machine, it started throwing a CORS error; the Angular client relies on CORS to connect.
Can someone help me with this? Do I need to enable something special in the Lambda function?

You need to enable CORS in your API Gateway configuration.
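If your API uses a non-proxy (Lambda) integration, enabling CORS on the resource in the API Gateway console (which creates the OPTIONS method and maps the Access-Control-Allow-Origin header) and then redeploying the stage is usually enough. If you use a Lambda proxy integration, the function itself has to return the CORS headers. Below is a minimal sketch of what such a handler could look like, assuming a proxy integration and the aws-lambda-java-events library; the class name, the wide-open "*" origin, and the placeholder body are illustrative, not your actual service code:
import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

public class ProductListHandler
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent input, Context context) {
        // CORS headers must be on the actual response, not only on the OPTIONS preflight
        Map<String, String> headers = new HashMap<>();
        headers.put("Access-Control-Allow-Origin", "*");
        headers.put("Access-Control-Allow-Headers", "Content-Type,Authorization");
        headers.put("Access-Control-Allow-Methods", "GET,OPTIONS");

        return new APIGatewayProxyResponseEvent()
                .withStatusCode(200)
                .withHeaders(headers)
                .withBody("{\"status\":200,\"products\":[]}"); // plug in your real JSON here
    }
}
Either way, remember to redeploy the API stage after changing the CORS configuration, otherwise the browser will keep seeing the old behaviour.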

Related

Testing Java lambda function via API Gateway: Unable to determine service/operation name to be authorized

I'm playing around with API Gateway. Basically, I have some simple Java code that aims to return a greeting message:
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

public class Greetings implements RequestHandler<GreetingsRequest, String> {

    // enable pretty print JSON output
    Gson gson = new GsonBuilder().setPrettyPrinting().create();

    @Override
    public String handleRequest(GreetingsRequest input, Context context) {
        LambdaLogger logger = context.getLogger();
        System.out.println("Welcome to lambda function");
        // log execution details
        logger.log("ENVIRONMENT VARIABLES: " + gson.toJson(System.getenv()));
        logger.log("CONTEXT: " + gson.toJson(context));
        // process event
        logger.log("EVENT: " + gson.toJson(input));
        logger.log("EVENT TYPE: " + input.getClass().toString());
        return "Hello " + input.getName();
    }
}
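For reference, GreetingsRequest is not shown in the question; a minimal sketch of the POJO it implies (the single name field is assumed from input.getName()) would be:
public class GreetingsRequest {

    // Assumed field, based on input.getName() in the handler above
    private String name;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}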
I've attached a role to the Lambda function with the following characteristics:
four AWS managed policies (AmazonAPIGatewayInvokeFullAccess, CloudWatchFullAccess, AmazonAPIGatewayAdministrator, AWSLambdaBasicExecutionRole) and a custom one (lambda_execute).
Role's Trust Relationship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "lambda.amazonaws.com",
          "apigateway.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
In relation to the custom policy "lambda_execute":
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "*"
    }
  ]
}
In relation to the API Gateway, I've configured the resource, the method request, and the integration request (screenshots omitted).
When I test the resource, the following message is returned:
<AccessDeniedException>
  <Message>Unable to determine service/operation name to be authorized</Message>
</AccessDeniedException>
Could anyone point out what I'm missing or doing wrong? Thanks so much in advance.
There are two options to invoke a Lambda function from an API Gateway REST API:
Integration type Lambda: you just need to give the Lambda function name.
Integration type AWS Service: this method is also used to send events directly from API Gateway to other AWS services like SNS, SQS, Kinesis, etc.
The question is about the second method, invoking Lambda via an AWS Service integration.
The path override proxies the request from API Gateway to a different endpoint.
The full endpoint to invoke a Lambda function is https://lambda.us-east-1.amazonaws.com/2015-03-31/functions/arn:aws:lambda:us-east-1:111122223333:function:my-function-name/invocations.
The first part, https://lambda.us-east-1.amazonaws.com/, is supplied by API Gateway; the second part, 2015-03-31/functions/arn:aws:lambda:us-east-1:111122223333:function:my-function-name/invocations, should be given in the path override.
If the path override is incorrect, that's when you get the "Unable to determine service/operation name to be authorized" error.
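For comparison, if you define the API with an OpenAPI/Swagger document rather than the console, the same AWS Service integration can be expressed with the x-amazon-apigateway-integration extension. The sketch below is illustrative only; the region, account ID, function name, and execution-role ARN are placeholders:
"x-amazon-apigateway-integration": {
  "type": "aws",
  "httpMethod": "POST",
  "uri": "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:111122223333:function:my-function-name/invocations",
  "credentials": "arn:aws:iam::111122223333:role/apigateway-lambda-invoke-role"
}
The uri value carries the same path-override string after the :path/ segment, which is why a typo there produces the error above.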

Azure Logic App throwing 302 Redirect Error having Server=BIG IP in Response Header for HTTP

I am getting a Redirect 302 error for an HTTP request in a Logic App. I am calling OneIdentityServer to get an access token, then I am calling a REST API passing the access token as a header for the key Authorization. I am getting a 302 Redirect error in the response, with header information like Server = BIG-IP and Location = /my.policy.
The same request, when triggered through Postman or SoapUI, works fine and I get a successful response, but it fails in the Azure Logic App.
I have also implemented the above scenario in a Function App. It works fine when I run the Function App code from Visual Studio using Postman, but when I test the same Function App after publishing it to the Azure portal, it gives the same error.
It seems like I have the same issue as you. I found one way that did not work for me, but maybe you could give it a shot if it fits your needs?
The solution I found in a blog post was first to add a "Switch" action to the Logic App flow and configure it to run after the HTTP action has either succeeded or failed.
Secondly, the Switch action should branch on the status code output of the HTTP request.
If the status code equals 302, you make another HTTP request, but with the URI set to the Location header from the first HTTP request. This made my Logic App return status code 200, but the response was that I needed to log in to get access to the API.
But maybe it is worth giving this solution a shot for your Logic App?
Here's the link to the blog post if you need deeper instructions: http://www.alessandromoura.com.br/2018/11/21/dealing-with-http-302-in-logic-apps/
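In the workflow's code view, the Switch described above might look roughly like the sketch below; the action names, the GET method, and the use of the raw Location value are assumptions, not the exact definition from the blog post:
"Switch_on_status_code": {
  "type": "Switch",
  "expression": "@outputs('HTTP')['statusCode']",
  "runAfter": {
    "HTTP": [ "Succeeded", "Failed" ]
  },
  "cases": {
    "Redirected": {
      "case": 302,
      "actions": {
        "HTTP_follow_redirect": {
          "type": "Http",
          "runAfter": {},
          "inputs": {
            "method": "GET",
            "uri": "@{outputs('HTTP')['headers']?['Location']}"
          }
        }
      }
    }
  },
  "default": {
    "actions": {}
  }
}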
Do you still have issues with this? Here is a screenshot of my HTTP action that is working (screenshot omitted).
I have put my URL in a variable since it changes for each page of results. I also found that, to use the authentication token from the first HTTP request, I needed to parse the body to be able to access the token. Here is the schema I used to parse the HTTP response body from the request that returns the access token:
{
  "properties": {
    "access_token": {
      "type": "string"
    },
    "expires_in": {
      "type": "string"
    },
    "expires_on": {
      "type": "string"
    },
    "ext_expires_in": {
      "type": "string"
    },
    "not_before": {
      "type": "string"
    },
    "resource": {
      "type": "string"
    },
    "token_type": {
      "type": "string"
    }
  },
  "type": "object"
}
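Once the body is parsed, the token can be referenced in the Authorization header of the second HTTP request with an expression along these lines (the action name Parse_JSON is an assumption based on the default naming):
Authorization: Bearer @{body('Parse_JSON')?['access_token']}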

How do I post a test Kinesis event from Postman to a local Lambda function running on serverless?

Sorry, wasn't sure how to make the question itself brief enough...
I can post data from Postman to my local Lambda function. The issue is that when running locally, I have to use this line of code...
event = JSON.parse(event.body);
...so that I can do this...
event.Records.forEach(function(record)
{
    // do some stuff
});
But when I deploy the function to AWS, parsing event.body is unnecessary. In fact it throws an error.
I was assuming that there is something different about the JSON (or other aspects of the request) that I'm posting from Postman to my local app when compared to what Kinesis actually sends. But the JSON blob I'm posting locally was logged directly from Lambda on AWS to Cloudwatch.
I'm missing something.
TBH, this only matters because having to comment out that line as a step in the deployment process is annoying and error-prone.
Here's the JSON (names have been changed to protect the innocent):
{
  "Records": [
    {
      "kinesis": {
        "kinesisSchemaVersion": "1.0",
        "partitionKey": "Thursday, 11 April 2019",
        "sequenceNumber": "49594660145138471912435706107712688932829223550684495922",
        "data": "some base 64 stuff",
        "approximateArrivalTimestamp": 1555045874.83
      },
      "eventSource": "aws:kinesis",
      "eventVersion": "1.0",
      "eventID": "shardId-000000000003:1234123412341234123412341234123412341234123412341234",
      "eventName": "aws:kinesis:record",
      "invokeIdentityArn": "arn:aws:iam::1234123412341234:role/lambda-kinesis-role",
      "awsRegion": "us-west-2",
      "eventSourceARN": "arn:aws:kinesis:us-west-2:1234123412341234:stream/front-end-requests"
    }
  ]
}

Test Resolver in AWS AppSync w/ API Key?

Currently, AWS AppSync provides an option to add a test context to test your resolver and make sure everything is correct. However, because I am using an API key for authentication, I'm not sure how to set this in the request mapping template so that the test context can run and I can test the validity of my API (especially since API key is the only auth mode that doesn't have an identity section in the test context). Can anyone help?
You are correct that the API key authorization mode does not populate the identity, even when you are calling your API from a client.
However, you can still add an identity object in your test context. To do this, you need to:
1. Get the authorization mode you will be using in the future (IAM, Cognito, OIDC).
2. Find the fields that authorization mode provides in ctx.identity. You can find them here: Resolver Context Reference.
3. Add those fields to your test context. For example, an IAM test context might look like this:
{
  "identity": {
    "accountId": "my aws account",
    "cognitoIdentityPoolId": "string",
    "cognitoIdentityId": "string",
    "sourceIp": ["string"],
    "username": "string",
    "userArn": "string"
  },
  "arguments": {},
  "source": {
    "lambda": "Hello, world!",
    "testCtx": "Hello, world!"
  },
  "result": "Hello, world!"
}
The request mapping template could look like this:
{
  "account": "$ctx.identity.accountId"
}
and the evaluated request mapping template would look like this when your test context is run:
{
  "account": "my aws account"
}
Note: You may also just want to switch your API to the authorization mode you plan on using, and then try queries as a logged-in user.

Google Logging API - What service name to use when writing entries from non-Google application?

I am trying to use Google Cloud Logging API to write log entries from a web application I'm developing (happens to be .net).
To do this, I must use the logging.projects.logs.entries.write request. This request dictates that I provide a serviceName argument:
{
  "entries": [
    {
      "textPayload": "test",
      "metadata": {
        "serviceName": "compute.googleapis.com",
        "projectId": "...",
        "region": "us-central1",
        "zone": "us-central1-a",
        "severity": "DEFAULT",
        "timestamp": "2015-01-13T19:17:01Z",
        "userId": ""
      }
    }
  ]
}
Unless I specify "compute.googleapis.com" as the serviceName I get an error 400 response:
{
  "error": {
    "code": 400,
    "message": "Unsupported service specified",
    "status": "INVALID_ARGUMENT"
  }
}
For now using "compute.googleapis.com" seems to work but I'm asking - what service name should I give, given that I'm not using Google Compute Engine or Google App Engine here?
The Cloud Logging API currently only officially supports Google resources, so the best course of action is to continue to use "compute.googleapis.com" as the service and supply the labels "compute.googleapis.com/resource_type" and "compute.googleapis.com/resource_id", which are used for indexing and visible in the UI drop-downs.
We also currently permit the service name "custom.googleapis.com" with index labels "custom.googleapis.com/primary_key" and "custom.googleapis.com/secondary_key" but that is not officially supported and subject to change in a future release.
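As a rough sketch of the first approach (assuming the write request's metadata accepts a labels map; the resource_type and resource_id values here are placeholders you would pick for your own app), the entry from the question would become something like:
{
  "entries": [
    {
      "textPayload": "test",
      "metadata": {
        "serviceName": "compute.googleapis.com",
        "projectId": "...",
        "region": "us-central1",
        "zone": "us-central1-a",
        "severity": "DEFAULT",
        "timestamp": "2015-01-13T19:17:01Z",
        "labels": {
          "compute.googleapis.com/resource_type": "my-web-app",
          "compute.googleapis.com/resource_id": "web-server-1"
        }
      }
    }
  ]
}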
