Encountering MappableException after upgrading Helidon and Jersey

After upgrading Helidon to 2.5.4 and Jersey to 2.35, I am getting the following exception in one of the APIs. With the previous versions of Helidon (2.0.2) and Jersey (2.29.1), it used to work fine. Can you please help resolve this issue?
{"level":"ERROR","logger":"org.glassfish.jersey.server.ServerRuntime$Responder","thread":"jersey-thread-7","ts":1671639547627,"x-request-id":"","msg":"An I/O error has occurred while writing a response message entity to the container output stream.:\norg.glassfish.jersey.server.internal.process.MappableException: java.io.IOException: Bad news: the stream has been closed\n\tat org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:67)\n\tat org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:139)\n\tat org.glassfish.jersey.message.internal.MessageBodyFactory.writeTo(MessageBodyFactory.java:1116)\n\tat org.glassfish.jersey.server.ServerRuntime$Responder.writeResponse(ServerRuntime.java:635)\n\tat org.glassfish.jersey.server.ServerRuntime$Responder.processResponse(ServerRuntime.java:373)\n\tat org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:363)\n\tat org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:258)\n\tat org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)\n\tat org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)\n\tat org.glassfish.jersey.internal.Errors.process(Errors.java:292)\n\tat org.glassfish.jersey.internal.Errors.process(Errors.java:274)\n\tat org.glassfish.jersey.internal.Errors.process(Errors.java:244)\n\tat org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)\n\tat org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:234)\n\tat org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:684)\n\tat io.helidon.webserver.jersey.JerseySupport$JerseyHandler.lambda$doAccept$4(JerseySupport.java:335)\n\tat io.helidon.common.context.Contexts.runInContext(Contexts.java:117)\n\tat io.helidon.common.context.ContextAwareExecutorImpl.lambda$wrap$7(ContextAwareExecutorImpl.java:154)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\nCaused by: java.io.IOException: Bad news: the stream has been closed\n\tat io.helidon.webserver.jersey.ResponseWriter$DataChunkOutputStream.awaitRequest(ResponseWriter.java:327)\n\tat io.helidon.webserver.jersey.ResponseWriter$DataChunkOutputStream.write(ResponseWriter.java:186)\n\tat org.glassfish.jersey.message.internal.CommittingOutputStream.write(CommittingOutputStream.java:200)\n\tat org.glassfish.jersey.message.internal.WriterInterceptorExecutor$UnCloseableOutputStream.write(WriterInterceptorExecutor.java:276)\n\tat com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:2171)\n\tat com.fasterxml.jackson.core.json.UTF8JsonGenerator._writeBytes(UTF8JsonGenerator.java:1260)\n\tat com.fasterxml.jackson.core.json.UTF8JsonGenerator.writeFieldName(UTF8JsonGenerator.java:284)\n\tat com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:726)\n\tat com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:774)\n\tat com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:178)
I also see the below warning whenever the MappableException is encountered.
{"level":"WARN","logger":"io.helidon.webserver.BareResponseImpl","thread":"jersey-thread-1","ts":1671689369384,"x-request-id":"","msg":"Entity was requested and not fully consumed before a response is sent. This is not supported. Connection will be closed. Please fix your route for POST /demo/api/v2/test/","podname":"deployment-demo-batch-0"}
Below is the signature of the API that errored:
@POST
@Path("test")
@Produces(MediaType.APPLICATION_JSON + "; charset=UTF-8")
@Consumes(MediaType.APPLICATION_JSON + "; charset=UTF-8")
@Timed(name = "app_kpi",
    absolute = true,
    reusable = true,
    unit = MetricUnits.NANOSECONDS, displayName = "test",
    tags = {"api=test"})
@Tag(name = "Version2")
@APIResponse(responseCode = "200", description = "Success", content = @Content(schema = @Schema(type = SchemaType.ARRAY, implementation = OutputRecord.class)))
@APIResponse(responseCode = "403", description = "Forbidden")
@APIResponse(responseCode = "401", description = "Unauthorized")
@APIResponse(responseCode = "500", description = "An unexpected error occurred during the request.")
@Authenticated
public Vector<OutputRecords> test(@RequestBody(description = "An array of input records to be processed.", required = true,
    content = @Content(schema = @Schema(type = SchemaType.ARRAY, implementation = InputRecord.class))) Vector<InputRecords> inputRecords)

This has been reported as issue 5775 for Helidon.

Related

google-api-nodejs-client - Service Account credentials authentication issues

I am trying to use the google-api-nodejs-client library to manage some resources in the Google Campaign Manager API.
I have confirmed that we currently have a project configured, and that this project has the Google Campaign Manager API enabled (see screenshot at the bottom).
I have tried several ways of authenticating myself (particularly API keys, OAuth2, and service account credentials). This question will focus on using a service account for authentication purposes.
Now, I have generated a new service account keyfile (see screenshot at the bottom), and I configured my code as follows, following the service-account-credentials section of the library's repo. I've also extended the auth scope to include the necessary scope according to the endpoint's API docs.
import { assert } from "chai";
import { google } from "googleapis";

it("can query userProfiles using service account keyfile", async () => {
  try {
    const auth = new google.auth.GoogleAuth({
      keyFile: "/full-path-to/credentials-service-account.json",
      scopes: [
        "https://www.googleapis.com/auth/cloud-platform",
        "https://www.googleapis.com/auth/dfatrafficking",
        "https://www.googleapis.com/auth/ddmconversions",
        "https://www.googleapis.com/auth/dfareporting",
      ],
    });
    const authClient = await auth.getClient();

    // set auth as a global default
    google.options({
      auth: authClient,
    });

    const df = google.dfareporting("v3.5");
    const res = await df.userProfiles.list({});
    console.log("res: ", res);
    assert(true);
  } catch (e) {
    console.error("error: ", e);
    assert(false);
  }
});
This results in the following error:
{
  "code": 403,
  "errors": [
    {
      "message": "Version v3.5 is no longer supported. Please upgrade to the latest version of the API.",
      "domain": "global",
      "reason": "forbidden"
    }
  ]
}
This is an interesting error, because v3.5 is the latest version of that API as of 14 April 2022. (This page shows the deprecation schedule: https://developers.google.com/doubleclick-advertisers/deprecation; notice that v3.3 and v3.4 are deprecated, while v3.5 is not.)
In any case, using a different version of the dfareporting API still results in an error:
// error thrown: "Version v3.5 is no longer supported. Please upgrade to the latest version of the API."
const df = google.dfareporting("v3.5");
// error thrown: "Version v3.4 is no longer supported. Please upgrade to the latest version of the API."
const df = google.dfareporting("v3.4");
// error thrown: 404 "The requested URL /dfareporting/v3.3/userprofiles was not found on this server"
const df = google.dfareporting("v3.3");
// Note 1: There are no other versions available
// Note 2: It is not possible to leave the version blank
const df = google.dfareporting();
// results in typescript error: "An argument for 'version' was not provided."
I also tried to query the floodlightActivities API, which failed with an authentication error.
// const res = await df.userProfiles.list({});
const res = await df.floodlightActivities.list({
  profileId: "7474579",
});
This, in turn, results in the following error:
{
  "code": 401,
  "errors": [
    {
      "message": "1075 : Failed to authenticate. Google account can not access the user profile/account requested.",
      "domain": "global",
      "reason": "authError",
      "location": "Authorization",
      "locationType": "header"
    }
  ]
}
Now, my question is:
am I doing something wrong while trying to authenticate using the service account credentials?
Or, is it possible that these endpoints do not support service-account-credentials?
Or, is something else going wrong here?

Unable to get Keycloak client-initiated account linking to work

The request to start the client-initiated account linking fails.
The console shows a WARN of type CLIENT_INITIATED_ACCOUNT_LINKING_ERROR with error invalid_token.
The URL was generated by a PHP backend system, as described here: https://www.keycloak.org/docs/latest/server_development/#client-initiated-account-linking.
I am also making sure to use UTF-8 encoding when generating the hash.
All prerequisites described in that section have been fulfilled.
I'm using Keycloak 15.0.2 and Laravel with Socialite to authenticate users.
This is how the hash is generated:
$keycloack_user = Socialite::driver('keycloak')->user();
$bearerToken = $keycloack_user->token;
$tokenParts = explode(".", $bearerToken);
$tokenHeader = base64_decode($tokenParts[0]);
$tokenPayload = base64_decode($tokenParts[1]);
$jwtHeader = json_decode($tokenHeader);
$jwtPayload = json_decode($tokenPayload);
$client_id = $jwtPayload->azp;
$host = $jwtPayload->iss;
$session_state = $jwtPayload->session_state;
$nonce = Str::random(20);
$provider = "google";
$input = $nonce . $session_state . $client_id . $provider;
$utf8encoded = utf8_encode($input);
$hashed = hash('sha256', $utf8encoded);
$encoded = rtrim(strtr(base64_encode($hashed), '+/', '-_'), '=');
Then the linking URL is constructed as shown below:
$redirect_uri = urlencode(...);
$full_url = $host . "/broker/". $provider ."/link?client_id=". $client_id ."&redirect_uri=". $redirect_uri ."&nonce=". $nonce ."&hash=" . $encoded;
I'm currently testing on my local machine, without using HTTPS for any of the applications. Logging in works fine, and when inspecting the JWT token, the needed role mappings are present:
"account": {
"roles": [
"manage-account",
"manage-account-links",
"view-profile"
]
}
But when accessing the URL it says "Invalid request", and the Keycloak console indicates the token is invalid.
Update: The solution was to return the result of the hash method as raw binary data:
$hashed = hash('sha256', $utf8encoded, true);
I had to work on the same task lately, but with the client implemented in JavaScript. I was also stuck for quite a while until I realized the uncommon way Keycloak expects the encoded hash value. You need to consider the following two points:
Encode the hash string into hexadecimal before the base64 conversion
Replace + with - and / with _. Besides that, remove trailing = symbols
Below you will find a working snippet written in JS:
import sjcl from "sjcl";

function hexToBase64(hexstring) {
  return btoa(hexstring.match(/\w{2}/g).map(function (a) {
    return String.fromCharCode(parseInt(a, 16));
  }).join(""));
}

// Assume nonce, session_state, clientId, provider to be given
var data = nonce + session_state + clientId + provider;
var myBitArray = sjcl.hash.sha256.hash(data);
var hashedData = sjcl.codec.hex.fromBits(myBitArray);
var base64HashedData = hexToBase64(hashedData);
base64HashedData = base64HashedData.replaceAll('+', '-').replaceAll('/', '_').replaceAll('=', '');
base64HashedData is then what you need to pass as the hash query parameter to the link endpoint of Keycloak.
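For reference, here is an illustrative Java sketch of the same encoding; both snippets above end up producing the URL-safe Base64 of the raw SHA-256 digest with the padding stripped. The class and method names are made up for this example; the input values must come from the client's token, as above.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Illustrative Java equivalent of the hash encoding described above:
// SHA-256 over the concatenated input, then URL-safe Base64 of the raw digest without padding.
public final class AccountLinkHash {

    public static String hash(String nonce, String sessionState, String clientId, String provider) throws Exception {
        String input = nonce + sessionState + clientId + provider;
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(input.getBytes(StandardCharsets.UTF_8));
        // The URL-safe encoder already uses '-' and '_'; withoutPadding() drops the trailing '='.
        return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
    }
}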

How can I fix the error "The [standard] token filter has been removed"?

I am trying to upgrade a project from Elasticsearch 2.3 with NEST version 2.5.8 to Elasticsearch 7.9 with NEST 7.11.1. When I try to create the index, I get the error:
# OriginalException: Elasticsearch.Net.ElasticsearchClientException: The remote server returned an error: (400) Bad Request.. Call: Status code 400 from: PUT /partsearch.01. ServerError: Type: illegal_argument_exception Reason: "failed to build synonyms" CausedBy: "Type: parse_exception Reason: "Invalid synonym rule at line 1" CausedBy: "Type: illegal_argument_exception Reason: "The [standard] token filter has been removed."""
The code that is attempting to create the index when this error occurs is:
protected internal CreateIndexResponse CreateIndex(string name)
{
    var indicesOperationResponse = this.elasticClientProxy.CreateIndex(
        name, c => c
            .Settings(
                s => s
                    .NumberOfReplicas(this.numberOfReplicas)
                    .NumberOfShards(this.numberOfShards)
                    .Setting("index.max_result_window", this.maxResultWindow)
                    .Analysis(
                        ad => ad
                            .CharFilters(this.RegisterCharFilters)
                            .Tokenizers(this.RegisterTokenizers)
                            .TokenFilters(this.RegisterTokenFilters)
                            .Analyzers(this.RegisterAnalyzers)))
            .Map<T>(this.Map)
            .Map<IndexMetaData>(this.MapIndexMetaData));
    return indicesOperationResponse;
}
The implementation of RegisterTokenFilters is:
protected internal override TokenFiltersDescriptor RegisterTokenFilters(TokenFiltersDescriptor descriptor)
{
    return descriptor
        .UserDefined(TokenFilter.NormalizeNumberSeparator.DisplayName, TokenFilter.NormalizeNumberSeparator.Filter)
        .UserDefined(TokenFilter.CustomStopWordFilter.DisplayName, TokenFilter.CustomStopWordFilter.Filter)
        .UserDefined(TokenFilter.StripNumberUnit.DisplayName, TokenFilter.StripNumberUnit.Filter)
        .UserDefined(TokenFilter.StripEndingPunctuation.DisplayName, TokenFilter.StripEndingPunctuation.Filter)
        .UserDefined(TokenFilter.StripCommaFromNumber.DisplayName, TokenFilter.StripCommaFromNumber.Filter)
        .UserDefined(TokenFilter.EnglishStemmer.DisplayName, TokenFilter.EnglishStemmer.Filter)
        .UserDefined(TokenFilter.EnglishPossessiveStemmer.DisplayName, TokenFilter.EnglishPossessiveStemmer.Filter)
        .UserDefined(TokenFilter.PatternFilter.DisplayName, TokenFilter.PatternFilter.Filter)
        .UserDefined(TokenFilter.SynonymFilter.DisplayName, TokenFilter.SynonymFilter.Filter)
        .UserDefined(TokenFilter.StripLeadingCharNoise.DisplayName, TokenFilter.StripLeadingCharNoise.Filter)
        .UserDefined(TokenFilter.NumericSynonymFilter.DisplayName, TokenFilter.NumericSynonymFilter.Filter)
        .UserDefined(TokenFilter.StemmerExclusionFilter.DisplayName, TokenFilter.StemmerExclusionFilter.Filter)
        .UserDefined(TokenFilter.AsciiFoldingTokenFilter.DisplayName, TokenFilter.AsciiFoldingTokenFilter.Filter)
        .UserDefined(TokenFilter.DashWordsSynonymFilter.DisplayName, TokenFilter.DashWordsSynonymFilter.Filter)
        .UserDefined(TokenFilter.DashSplitTokenFilter.DisplayName, TokenFilter.DashSplitTokenFilter.Filter);
}
I wanted to find and remove the standard token filter based on answers I found to similar errors, but I don't see it being used here.
How can I troubleshoot and resolve this issue?
The method RegisterAnalyzers made a call that led to this code:
private static AnalyzerBase CustomDescriptionAnalyzer()
{
    var customAnalyzer = new CustomAnalyzer();
    customAnalyzer.CharFilter = new List<string>
    {
        CharacterFilter.HtmlStrip.DisplayName,
        CharacterFilter.UniCodeFilter.DisplayName
    };
    customAnalyzer.Tokenizer = Tokenizer.DescriptionTokenizer.DisplayName;
    customAnalyzer.Filter = new List<string>
    {
        TokenFilter.Standard.DisplayName,
        TokenFilter.Lowercase.DisplayName,
        TokenFilter.StripLeadingCharNoise.DisplayName,
        TokenFilter.PatternFilter.DisplayName,
        TokenFilter.StripLeadingCharNoise.DisplayName,
        TokenFilter.NormalizeNumberSeparator.DisplayName,
I removed the line TokenFilter.Standard.DisplayName from the customAnalyzer.Filter list, and now I no longer get the error Type: parse_exception Reason: "Invalid synonym rule at line 1" CausedBy: "Type: illegal_argument_exception Reason: "The [standard] token filter has been removed"".
See also
Breaking changes in 7.0
The [standard] token filter has been removed #175
Standard token filter removal causes exceptions after upgrade #50734

Unable to map the error coming from the application deployed using a Lambda function

I have a Spring Boot application deployed using a Lambda function. Please find the sample below.
Controller
@RequestMapping(method = RequestMethod.GET, value = "/bugnlow/findByRegionId/{regionId}", produces = "application/json")
public ResponseEntity<List<Bunglow>> findAllBunglowsByRegionId(@PathVariable int regionId, @RequestParam int page, @RequestParam int size) {
    Page<Bunglow> bunglows = bunglowsService.findAllBunglowsByRegionId(regionId, page, size);
    if (bunglows.getContent().size() == 0) {
        return ResponseEntity.notFound().build();
    }
    return ResponseEntity.ok(bunglows.getContent());
}
Service
if the "regionid" is invalid, I am throwing a runtime exception that contains message "region id is invalid".
throw new RuntimeException(Constant.INVALID_REGION_ID);
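For context, a minimal sketch of how such a service method might look; the repository, its query methods, and the validity check are hypothetical, and only the thrown exception and the method signature match the code above.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.stereotype.Service;

@Service
public class BunglowsService {

    @Autowired
    private BunglowRepository bunglowRepository; // hypothetical Spring Data repository

    public Page<Bunglow> findAllBunglowsByRegionId(int regionId, int page, int size) {
        if (!bunglowRepository.existsByRegionId(regionId)) { // hypothetical validity check
            // This is the exception that surfaces as the HTTP 500 shown below
            throw new RuntimeException(Constant.INVALID_REGION_ID);
        }
        // PageRequest.of requires Spring Data 2.x; older versions use new PageRequest(page, size)
        return bunglowRepository.findByRegionId(regionId, PageRequest.of(page, size));
    }
}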
I am getting the below response when testing it locally and sending an invalid region id:
[1]
{
  "timestamp": 1519577956092,
  "status": 500,
  "error": "Internal Server Error",
  "exception": "java.lang.RuntimeException",
  "message": "Region Id is invalid",
  "path": "/***/bugnlow/findByRegionId/343333"
}
I deployed the above application to AWS using a Lambda function. When I send a request from the AWS API Gateway to the deployed application, I get the below error with Amazon response headers.
[2] Request: /bugnlow/findByRegionId/342324?page=0&size=5 Status: 500 Latency: 166 ms Response Body
{ "message": "Internal server error" }
For this particular endpoint, integration responses have already been configured for error 500, but I didn't use a mapping template setting the content type to application/json.
I am able to get the localized error by changing the return type in the controller class to
ResponseEntity<?>
But then List<Bunglow> is not displayed as the example response value in the Swagger UI.
I need to get the exact response [1] from the AWS console. How can I do that?
Also, instead of error 500, how can I send "Region id is invalid" with HTTP status 400 (Bad Request)?
It would be a great help if someone could assist me with this.
Thanks
I was able to resolve my problem by creating a class annotated with
@ControllerAdvice
and handling the exception using
@ExceptionHandler
At each point where I need to validate and respond with an error, I created a custom exception "BunglowCustomExceptionResponse" and catch the exception in the advice class "BunglowControllerAdvice", then send the response with the exception message and a Bad Request status, as below.
@ControllerAdvice
public class BunglowControllerAdvice {

    @ExceptionHandler
    public ResponseEntity handleCustomBunglowException(Exception e) {
        logger.info("***Exception occurred :" + e.getLocalizedMessage());
        return new ResponseEntity<BunglowCustomExceptionResponse>(
                new BunglowCustomExceptionResponse(e.getLocalizedMessage()), HttpStatus.BAD_REQUEST);
    }
}
Then I was able to get the expected response, similar to the below, with the Bad Request status code 400.
{"responseMessage": "error message"}
I don't know if this is the best way, but it resolved my problem. Anyway, thanks a lot to everyone who viewed this and tried to help me.
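For completeness, the BunglowCustomExceptionResponse class itself is not shown above; a minimal sketch consistent with the JSON output would be a simple wrapper around a single responseMessage field (the field name is assumed from that output).
// Hypothetical sketch of the response body class used by the advice above;
// the only assumption is a single responseMessage field matching the JSON shown.
public class BunglowCustomExceptionResponse {

    private String responseMessage;

    public BunglowCustomExceptionResponse(String responseMessage) {
        this.responseMessage = responseMessage;
    }

    // A getter is needed so Jackson serializes the field as "responseMessage".
    public String getResponseMessage() {
        return responseMessage;
    }
}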

Why do I get a 403 Forbidden error while posting JSON to an Elasticsearch endpoint on AWS?

I am posting JSON to AWS Elasticsearch, using a Java Lambda function.
public Object handleRequest(DynamodbEvent dynamodbEvent, Context context) {
    // code to generate the json document
    AmazonDynamoDBClient amazonDynamoDBClient = new AmazonDynamoDBClient();
    List<DynamodbEvent.DynamodbStreamRecord> dynamodbStreamRecordlist = dynamodbEvent.getRecords();
    if (!dynamodbStreamRecordlist.isEmpty()) {
        DynamodbEvent.DynamodbStreamRecord record = dynamodbStreamRecordlist.get(0);
        if (record.getEventSource().equalsIgnoreCase("aws:dynamodb"))
            tableName = getTableNameFromARN(record.getEventSourceARN());
    }
    LaneAnnotation laneAnnotation = new LaneAnnotation();
    ScanRequest scanRequest = new ScanRequest().withTableName(tableName);
    ScanResult result = amazonDynamoDBClient.scan(scanRequest);
    List<Lines> linesFinalList = new ArrayList<Lines>();
    if (result != null) {
        for (Map<String, AttributeValue> item : result.getItems()) {
            // code for looping through the table items and generating a json object for the elastic search model
        }
        // Code to post the json below -
        RestTemplate restTemplate = new RestTemplate();
        SimpleClientHttpRequestFactory clientHttpRequestFactory = (SimpleClientHttpRequestFactory) restTemplate.getRequestFactory();
        clientHttpRequestFactory.setConnectTimeout(10000);
        clientHttpRequestFactory.setReadTimeout(10000);
        HttpEntity<String> entity = new HttpEntity<String>(<json goes here>, headers);
        try {
            restTemplate.exchange(endpoint, HttpMethod.POST, entity, String.class);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
However, I see the following error when I test my AWS lambda function -
org.springframework.web.client.HttpClientErrorException: 403 Forbidden
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:91)
at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:700)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:653)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:613)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:531)
at com.here.aws.LambdaApplication.handleRequest(LambdaApplication.java:166)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at lambdainternal.EventHandlerLoader$PojoMethodRequestHandler.handleRequest(EventHandlerLoader.java:456)
at lambdainternal.EventHandlerLoader$PojoHandlerAsStreamHandler.handleRequest(EventHandlerLoader.java:375)
at lambdainternal.EventHandlerLoader$2.call(EventHandlerLoader.java:1139)
at lambdainternal.AWSLambda.startRuntime(AWSLambda.java:285)
at lambdainternal.AWSLambda.<clinit>(AWSLambda.java:57)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at lambdainternal.LambdaRTEntry.main(LambdaRTEntry.java:94)
I even modified the access policy and added my IP address.
Have others faced this too? How did you resolve it?
Any help will be appreciated.
EDIT1: I am now trying to incorporate signing of the request as is mentioned here - https://aws.amazon.com/blogs/security/how-to-control-access-to-your-amazon-elasticsearch-service-domain/
Will report back if it goes well.
EDIT2:
Here's the second way of sending a request that I tried, referring to the link above:
@Override
public Object handleRequest(DynamodbEvent dynamodbEvent, Context context) {
    AmazonDynamoDBClient amazonDynamoDBClient = new AmazonDynamoDBClient();
    List<DynamodbEvent.DynamodbStreamRecord> dynamodbStreamRecordlist = dynamodbEvent.getRecords();
    if (!dynamodbStreamRecordlist.isEmpty()) {
        DynamodbEvent.DynamodbStreamRecord record = dynamodbStreamRecordlist.get(0);
        if (record.getEventSource().equalsIgnoreCase("aws:dynamodb"))
            tableName = getTableNameFromARN(record.getEventSourceARN());
    }
    LaneAnnotation laneAnnotation = new LaneAnnotation();
    ScanRequest scanRequest = new ScanRequest().withTableName(tableName);
    ScanResult result = amazonDynamoDBClient.scan(scanRequest);
    List<Lines> linesFinalList = new ArrayList<Lines>();
    if (result != null) {
        for (Map<String, AttributeValue> item : result.getItems()) {
            // Generate the json object that needs to be sent in the request
        }
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON_UTF8);
        Request<?> request = new DefaultRequest<Void>(SERVICE_NAME);
        request.setContent(new ByteArrayInputStream(elasticSearchModel.toString().getBytes()));
        request.setEndpoint(URI.create(endpoint));
        request.setHttpMethod(HttpMethodName.POST);
        AWS4Signer signer = new AWS4Signer();
        signer.setServiceName(SERVICE_NAME);
        signer.setRegionName(Regions.US_EAST_1.getName());
        AWSCredentialsProvider credsProvider =
                new DefaultAWSCredentialsProviderChain();
        AWSCredentials creds = credsProvider.getCredentials();
        // Sign request with supplied creds
        signer.sign(request, creds);
        log.info("Request signed");
        ExecutionContext executionContext = new ExecutionContext(true);
        ClientConfiguration clientConfiguration = new ClientConfiguration();
        AmazonHttpClient client = new AmazonHttpClient(clientConfiguration);
        MyHttpResponseHandler<Void> responseHandler = new MyHttpResponseHandler<Void>();
        MyErrorHandler errorHandler = new MyErrorHandler();
        Response<Void> response =
                client.execute(request, responseHandler, errorHandler, executionContext);
        return dynamodbEvent;
    }
However, I get the following error -
Check the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
The Canonical String for this request should have been
'GET
/
host:somehostname-XXXXXXXXXXXXXXXX.us-east-1.es.amazonaws.com
x-amz-date:20170130T105736Z
x-amz-security-token:FQoDYXdzEG4aDJJ4ryjXXXXXXXXXXXXXXXX/auMHooYENY6YXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
host;x-amz-date;x-amz-security-token
e3b0c4429XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
The String-to-Sign should have been
'AWS4-HMAC-SHA256
20170130T105736Z
20170130/us-east-1/es/aws4_request
9a5b4c92ec121c333f8cdXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
"}"
10:57:36.818 [main] DEBUG org.apache.http.headers - http-outgoing-1 << HTTP/1.1 403 Forbidden
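As background for that error message: the two quoted blocks are the SigV4 "canonical request" and "string to sign" that the service computed for the request it received; the mismatch means the client signed different strings. Below is a simplified, illustrative sketch of how they are derived. It is not a replacement for the SDK's AWS4Signer (which also canonicalizes headers, includes the security token, and computes the final HMAC signature); the values are placeholders taken from the error output above.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Simplified illustration of the two strings quoted in the SigV4 error above.
public class SigV4Strings {

    static String hexSha256(String data) throws Exception {
        byte[] hash = MessageDigest.getInstance("SHA-256").digest(data.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : hash) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder values, as quoted in the error output above
        String method = "GET";
        String host = "somehostname-XXXXXXXXXXXXXXXX.us-east-1.es.amazonaws.com";
        String amzDate = "20170130T105736Z";
        String signedHeaders = "host;x-amz-date";
        String payloadHash = hexSha256(""); // e3b0c442... is the SHA-256 of an empty body

        // Canonical request: method, URI, query string, canonical headers, signed headers, payload hash
        String canonicalRequest = String.join("\n",
                method, "/", "",
                "host:" + host + "\n" + "x-amz-date:" + amzDate + "\n",
                signedHeaders, payloadHash);

        // String to sign: algorithm, timestamp, credential scope, hash of the canonical request
        String stringToSign = String.join("\n",
                "AWS4-HMAC-SHA256", amzDate, "20170130/us-east-1/es/aws4_request",
                hexSha256(canonicalRequest));

        System.out.println(canonicalRequest);
        System.out.println(stringToSign);
    }
}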
AWS Elasticsearch has a security gateway that provides authentication. The authentication options are configured in the AWS Elasticsearch console.
You are receiving a 403 authentication error because your AWS Elasticsearch access policy does not allow the IP of the NAT gateway that your Lambda uses.
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-access-policies
Here is an access policy template that you can use for IP-based authentication.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:YOUR-AWS-ACCOUNT-ID:domain/YOUR-ELASTICSEARCH-DOMAIN-NAME/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "YOUR-NAT-GATEWAY-PUBLIC-IP/32"
          ]
        }
      }
    }
  ]
}
