I'm unable to index a document in the AWS-hosted Elasticsearch cluster using signed requests.
Infrastructure setup
Elasticsearch version: 7.4
Access policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:<RESOURCE>/*"
    }
  ]
}
Code
The following build file loads the client libraries at version 7.6. I have also downgraded them to match the cluster version, but with no effect.
build.gradle
// ...
implementation("org.springframework.data:spring-data-elasticsearch")
implementation("org.elasticsearch:elasticsearch")
implementation("org.elasticsearch.client:elasticsearch-rest-high-level-client")
// ...
The client configuration is defined below. The environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_PROFILE are set.
@Configuration
public class ElasticsearchClientConfig extends AbstractElasticsearchConfiguration {

    @Value("${elasticsearch.host}")
    private String elasticsearchHost;

    @Value("${elasticsearch.port}")
    private int elasticsearchPort;

    @Override
    @Bean
    public RestHighLevelClient elasticsearchClient() {
        var SERVICE_NAME = "es";
        var REGION = "us-east-1";
        var defaultCP = new DefaultAWSCredentialsProviderChain();

        AWS4Signer signer = new AWS4Signer();
        signer.setServiceName(SERVICE_NAME);
        signer.setRegionName(REGION);

        HttpRequestInterceptor interceptor =
                new AWSRequestSigningApacheInterceptor(SERVICE_NAME, signer, defaultCP);

        RestClientBuilder restClientBuilder = RestClient
                .builder(HttpHost.create(elasticsearchHost))
                .setHttpClientConfigCallback(hacb -> hacb.addInterceptorLast(interceptor));

        return new RestHighLevelClient(restClientBuilder);
    }
}
The AWSRequestSigningApacheInterceptor class is taken from here.
So far so good: when the application starts, it accesses the cluster and manages to create the relevant indices correctly.
Problem
The problem occurs when performing the save() operation from a Spring Data repository, which makes two requests to ES:
@Override
public <S extends T> S save(S entity) {
    Assert.notNull(entity, "Cannot save 'null' entity.");
    operations.save(entity, getIndexCoordinates());
    operations.indexOps(entity.getClass()).refresh();
    return entity;
}
Looking at the logs, the first request succeeds, while the second call ends with the following error:
org.elasticsearch.client.ResponseException: method [POST], host [HOST], URI [/asset/_refresh?ignore_throttled=false&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true], status line [HTTP/1.1 403 Forbidden]
{"message":"The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."}
More detailed logs for both operations:
Call for saving (ends with 200 status code):
com.amazonaws.auth.AWS4Signer : AWS4 Canonical Request: '"PUT
/asset/_doc/2
timeout=1m
content-length:128
content-type:application/json
host:<HOST>
user-agent:Apache-HttpAsyncClient/4.1.4 (Java/11.0.2)
x-amz-date:20200715T110349Z
content-length;content-type;host;user-agent;x-amz-date
55c1faf282ca0da145667bf7632f667349dbe30ed1edc64439cec2e8d463e176"
2020-07-15 13:03:49.240 DEBUG 3942 --- [nio-8080-exec-1] com.amazonaws.auth.AWS4Signer : AWS4 String to Sign: '"AWS4-HMAC-SHA256
20200715T110349Z
20200715/us-east-1/es/aws4_request
76b6547ad98145ef7ad514baac4ce67fa885bd56073e9855757ade19e28f6fec"
Call for refreshing (ends with 403 status code):
com.amazonaws.auth.AWS4Signer : AWS4 Canonical Request: '"POST
/asset/_refresh
host:<HOST>
user-agent:Apache-HttpAsyncClient/4.1.4 (Java/11.0.2)
x-amz-date:20200715T110349Z
host;user-agent;x-amz-date
bbe4763d6a0252c6e955bcc4884e15035479910b02395548dbb16bcbad1ddf95"
2020-07-15 13:03:49.446 DEBUG 3942 --- [nio-8080-exec-1] com.amazonaws.auth.AWS4Signer : AWS4 String to Sign: '"AWS4-HMAC-SHA256
20200715T110349Z
20200715/us-east-1/es/aws4_request
189b39cf0475734e29c7f9cd5fd845fc95f73c95151a3b6f6d430b95f6bee47e"
When indexing documents directly using the lower-level clients, everything works fine. Comparing the two canonical requests above, the refresh call has no content-length or content-type headers because it has an empty body, so I suspect the signature calculation misbehaves for such subsequent API calls.
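As a diagnostic (my own sketch, not part of the original post), the failing empty-body refresh can be replayed through the low-level client obtained from the same signed bean, to compare against Spring Data's behaviour:

import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestHighLevelClient;

// Sketch: replay the empty-body POST that fails under Spring Data, using the
// low-level client taken from the same signed RestHighLevelClient bean.
void replayRefresh(RestHighLevelClient client) throws java.io.IOException {
    Request refresh = new Request("POST", "/asset/_refresh");
    Response response = client.getLowLevelClient().performRequest(refresh);
    System.out.println(response.getStatusLine()); // expect HTTP/1.1 200 OK if signing is correct
}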
I had the same issue. In my case I was also using AWSRequestSigningApacheInterceptor, but an old version of it. After upgrading to the latest version, the problem was fixed.
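For reference, a sketch of what pulling in a maintained build might look like in the build.gradle above; the io.github.acm19 coordinates are my assumption about which fork is meant, and the version is deliberately left as a placeholder:

// Assumed coordinates of the maintained interceptor fork; pick the current release.
implementation("io.github.acm19:aws-request-signing-apache-interceptor:<latest-version>")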
Related
I have a simple API function to upload a file similar to:
@PostMapping(value = "/documents",
        consumes = {MediaType.MULTIPART_FORM_DATA_VALUE})
public Mono<ResponseEntity<String>> uploadDocument(@RequestPart Mono<FilePart> file) {
    return storeDocumentService
            .upload(file)
            .map(fileLocation -> ResponseEntity.ok(fileLocation));
}
The code works fine and uploads the file. The problem comes when I want to improve the response by returning a link to the uploaded file. For this I want to use HATEOAS ('org.springframework.boot:spring-boot-starter-hateoas'). As soon as I add that dependency to my build.gradle, the endpoint stops working and I get this response:
{
  "timestamp": "2023-02-20T04:28:10.620+00:00",
  "status": 415,
  "error": "Unsupported Media Type",
  "path": "/documents"
}
and I also get the following in the logs:
2023-02-20T05:28:10.618+01:00 WARN 2993 --- [nio-8080-exec-4] .w.s.m.s.DefaultHandlerExceptionResolver : Resolved [org.springframework.web.HttpMediaTypeNotSupportedException: Content-Type 'application/pdf' is not supported]
It is important to point out that I upload a ".pdf" file with the header "Content-Type: multipart/form-data". Most importantly, the only difference between the working and the non-working code is the added 'org.springframework.boot:spring-boot-starter-hateoas' dependency.
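A possible explanation, and my assumption rather than something confirmed in this thread: the endpoint is reactive (Mono, FilePart), while spring-boot-starter-hateoas drags in spring-webmvc, and when both stacks are on the classpath Spring Boot prefers the servlet one, which cannot bind Mono<FilePart> parts. If that is what happens here, one sketch of a fix is to pin the application to the reactive stack in application.properties:

# Assumption: forces WebFlux even with spring-webmvc on the classpath
spring.main.web-application-type=reactive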
For uploading a file, we can simply use the type MultipartFile. It handles all file types, and we can easily retrieve the file's input stream (data) from it. The following code may help you:
#PostMapping("uploadExcelData")
public ResponseEntity<?> uploadExcelData(#RequestParam MultipartFile file) throws IOException {
List<...> dataList = fileHandling.convertFileAsJson(file);
if (!dataList.isEmpty()) {
return ....
} else {
return ResponseEntity.ok("No Records found !!");
}
}
I hope the above code helps you handle the file in the endpoint.
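For completeness, a hypothetical client-side check (not from the answer) that exercises such an endpoint with a multipart request; the URL and file name are assumptions:

import org.springframework.core.io.FileSystemResource;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;
import org.springframework.web.client.RestTemplate;

public class UploadClientExample {
    public static void main(String[] args) {
        // Post a local file as multipart/form-data to the endpoint above.
        MultiValueMap<String, Object> body = new LinkedMultiValueMap<>();
        body.add("file", new FileSystemResource("data.xlsx")); // file name is an assumption

        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.MULTIPART_FORM_DATA);

        ResponseEntity<String> response = new RestTemplate().postForEntity(
                "http://localhost:8080/uploadExcelData", new HttpEntity<>(body, headers), String.class);
        System.out.println(response.getStatusCode() + " " + response.getBody());
    }
}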
I am using an S3 bucket with my Spring Boot application.
I created a folder in the bucket and uploaded files into it from the application using the upload function below. When I list the files within the folder, I can see them, but I cannot download them; I always get a 403.
Code snippet for uploading, listing the objects and downloading thereafter:
// Download is failing
public File downloadObject(String filePath) {
    File file = null;
    log.info("Downloading object {} from s3 bucket {}", filePath, bucketName);
    try {
        file = File.createTempFile(filePath, "");
        file.deleteOnExit();
        amazonS3.getObject(new GetObjectRequest(bucketName, filePath), file);
    } catch (Exception exception) {
        exception.printStackTrace();
    }
    return file;
}
// The following function is working perfectly fine
public List<String> listObjects(String pathPrefix) {
    final ListObjectsV2Result listingResponse = amazonS3.listObjectsV2(new ListObjectsV2Request()
            .withPrefix(pathPrefix)
            .withBucketName(bucketName));
    if (Objects.nonNull(listingResponse)) {
        List<String> result = listingResponse.getObjectSummaries().stream()
                .map(S3ObjectSummary::getKey)
                .collect(Collectors.toList());
        result.remove(pathPrefix);
        return result;
    }
    return Collections.emptyList();
}
// Uploading is also working fine
public void uploadFile(InputStream inputStream, String filePath) {
    try {
        amazonS3.putObject(new PutObjectRequest(bucketName, filePath, inputStream, null));
    } catch (SdkClientException exception) {
        exception.printStackTrace();
    }
}
The S3 bucket policy is as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAReadWriteAccessToBucket",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456:role/abcd"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::test-bucket/*"
    }
  ]
}
As you can see, the bucket policy grants every permission. Even so, the download fails, and I cannot figure out why. Please help.
The first thing I notice is that you are using the old V1 S3 API. Amazon strongly recommends moving to the AWS SDK for Java V2.
The AWS SDK for Java 2.x is a major rewrite of the version 1.x code base. It’s built on top of Java 8+ and adds several frequently requested features. These include support for non-blocking I/O and the ability to plug in a different HTTP implementation at run time.
The Amazon S3 V2 Java API works nicely in a Spring application. There is a multi-service example that shows the use of the S3 V2 Java API within a Spring Boot app. In that use case, we get a byte[] to pass to the Amazon Rekognition service.
To get a byte[] from an object in an Amazon S3 bucket (which is what I assume you mean by "download"), you can use V2 code like this:
public byte[] getObjectBytes(String bucketName, String keyName) {
    s3 = getClient();
    try {
        // Create a GetObjectRequest instance
        GetObjectRequest objectRequest = GetObjectRequest
                .builder()
                .key(keyName)
                .bucket(bucketName)
                .build();

        // Get the byte[] from this S3 object
        ResponseBytes<GetObjectResponse> objectBytes = s3.getObjectAsBytes(objectRequest);
        byte[] data = objectBytes.asByteArray();
        return data;
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
    return null;
}
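For completeness, a minimal sketch of the getClient() helper referenced above; the region is an assumption, so use the region your bucket lives in:

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

private S3Client getClient() {
    // Region is an assumption; pick the region your bucket lives in.
    return S3Client.builder()
            .region(Region.US_EAST_1)
            .build();
}

Calling getObjectBytes("test-bucket", "folder/file.pdf") then returns the object's bytes, which you can write to disk with java.nio.file.Files.write(...).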
Refer to this end-to-end example, which shows how to perform this use case in a Spring app. Look at the code in the S3Service class.
Creating an example AWS photo analyzer application using the AWS SDK for Java
I just ran this app and it works perfectly...
I am running into an issue while creating a policy in the Android Management API. I am using Spring Boot as the backend.
This is my request:
{
  "name": "testpolicy",
  "applications": [
    {
      "packageName": "com.adobe.reader",
      "installType": "REQUIRED_FOR_SETUP",
      "defaultPermissionPolicy": "GRANT"
    }
  ],
  "kioskCustomization": {
    "powerButtonActions": "POWER_BUTTON_ACTIONS_UNSPECIFIED",
    "systemErrorWarnings": "SYSTEM_ERROR_WARNINGS_UNSPECIFIED",
    "systemNavigation": "SYSTEM_NAVIGATION_UNSPECIFIED",
    "statusBar": "STATUS_BAR_UNSPECIFIED",
    "deviceSettings": "DEVICE_SETTINGS_UNSPECIFIED"
  },
  "kioskCustomLauncherEnabled": true
}
This is my response:
JSON parse error: Can not set com.google.api.services.androidmanagement.v1.model.KioskCustomization field com.google.api.services.androidmanagement.v1.model.Policy.kioskCustomization to java.util.LinkedHashMap; nested exception is com.fasterxml.jackson.databind.JsonMappingException: Can not set com.google.api.services.androidmanagement.v1.model.KioskCustomization field com.google.api.services.androidmanagement.v1.model.Policy.kioskCustomization to java.util.LinkedHashMap (through reference chain: com.google.api.services.androidmanagement.v1.model.Policy["kioskCustomization"])
Code written in the controller:
#PostMapping("/policies")
public ResponseEntity<com.google.api.services.androidmanagement.v1.model.Policy> savePolicy(
#RequestBody Policy policy, #RequestParam String enterpriseId) throws Exception {
}
Control is not even reaching the controller body.
I tried your request in the Quickstart and did not encounter any error, so the cause may lie in your implementation (the parser being used, or the format as typed). I suggest trying your request in the Quickstart to confirm there is nothing wrong with the request itself.
You can also check this documentation for a sample app that demonstrates how to provision a corporate-owned, single-use (COSU) device and send it a reboot command. The app uses the Android Management API Java client library.
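If the failure is indeed in deserialization (the stack trace shows Spring's Jackson failing to populate the generated Policy model), one sketch of a workaround is to accept the raw JSON and parse it with the Google HTTP client's own parser. This wiring is my assumption, not code from the thread:

import com.google.api.client.json.JsonObjectParser;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.androidmanagement.v1.model.Policy;
import java.io.StringReader;

@PostMapping("/policies")
public ResponseEntity<Policy> savePolicy(@RequestBody String policyJson,
        @RequestParam String enterpriseId) throws Exception {
    // Parse with the Google client's parser, which knows how to populate
    // the generated model classes, instead of relying on Spring's Jackson.
    JsonObjectParser parser = new JsonObjectParser(JacksonFactory.getDefaultInstance());
    Policy policy = parser.parseAndClose(new StringReader(policyJson), Policy.class);
    // ... hand the policy to the Android Management API client here ...
    return ResponseEntity.ok(policy);
}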
I have a simple downstream service for file upload. Sample code:
@RestController
@RequestMapping("/file")
public class FileController {

    @PostMapping("/upload")
    public ResponseEntity<?> uploadFile(@RequestParam("file") MultipartFile file,
            @RequestParam(value = "delay", required = false, defaultValue = "0") int delay) throws Exception {
        System.out.println(String.join(System.getProperty("line.separator"),
                "File Name => " + file.getOriginalFilename(),
                "File Size => " + file.getSize() + "bytes",
                "File Content Type => " + file.getContentType()));
        TimeUnit.MILLISECONDS.sleep(delay);
        return ResponseEntity.ok(file.getName() + " uploaded");
    }
}
and a CustomExceptionHandler that returns BAD_REQUEST if there is a MultipartException:
@Configuration
@ControllerAdvice
public class CustomExceptionHandler {

    @ExceptionHandler(MultipartException.class)
    public ResponseEntity<String> handleMultipartException(MultipartException ex) {
        return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(ex.getMessage());
    }
}
The size limit is 10MB in application.yml:
spring:
  servlet:
    multipart:
      max-file-size: 10MB
      max-request-size: 10MB
If I upload a large file directly, it gives me a 400 status as expected.
When I try to hit the same endpoint via Spring Cloud Gateway, I get a 500 error response instead, and the logs show the following:
2019-11-08 00:36:10.797 ERROR 21904 --- [ctor-http-nio-2] a.w.r.e.AbstractErrorWebExceptionHandler : [86e57f7e] 500 Server Error for HTTP POST "/product-service/file/upload"
reactor.netty.http.client.PrematureCloseException: Connection has been closed BEFORE response, while sending request body
Note that the gateway is configured to accept large files, with the RequestSize filter set globally to allow far more than 10MB.
How can I get the same response code as given by the downstream service?
I also checked with traditional Zuul, and I get a 500 error there too.
For the gateway, in this particular case, I know we can use the RequestSize filter so the gateway returns the proper error code, but then we have to identify beforehand all the routes that expect this.
Also, other validations in the API, such as authorization, will have the same issue: the response codes produced by those validations will not propagate up.
Sample code spring-cloud-gateway/product-service/eureka - https://github.com/dhananjay12/spring-cloud/tree/master/spring-routing
Can you try removing the file-size limit on the microservice and uploading directly, without going through the gateway? Try the value -1 for these properties in the properties file of the microservice where you want to upload the file:
spring.servlet.multipart.max-file-size=-1
spring.servlet.multipart.max-request-size=-1
If that works, the problem may be with the Zuul proxy's ribbon socket size. There are properties for this type of situation, in the properties file of the gateway:
ribbon.eager-load.enabled=true
hystrix.command.default.execution.timeout.enabled=false
hystrix.command.default.execution.isolation.strategy=THREAD
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=3999996
ribbon.ConnectTimeout=999999
ribbon.ReadTimeout=999999
ribbon.SocketTimeout=999999
zuul.host.socket-timeout-millis=999999
zuul.host.connect-timeout-millis=999999
zuul.sensitiveHeaders=Cookie,Set-Cookie
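Back on making the gateway itself return a proper status: a minimal sketch of the RequestSize filter the asker mentions, using the Java DSL. The route id, path, and lb:// URI here are hypothetical, and depending on your Spring Cloud Gateway version setRequestSize takes a Long or a DataSize:

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                // Hypothetical route mirroring the product-service from the sample repo.
                .route("product-service", r -> r.path("/product-service/**")
                        // Rejects oversized requests at the gateway (413) instead of
                        // letting the proxied call fail with a 500.
                        .filters(f -> f.setRequestSize(10_000_000L))
                        .uri("lb://product-service"))
                .build();
    }
}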
I've been trying to set up HTTPS on a stateless API endpoint, following the instructions in the Microsoft documentation and various posts/blogs I could find. It works fine locally, but I'm struggling to make it work after deploying it to my dev server, where I get:
Browser : HTTP ERROR 504
Vm event viewer : HandlerAsyncOperation EndProcessReverseProxyRequest failed with FABRIC_E_TIMEOUT
SF event table : Error while processing request: request url = https://mydomain:19081/appname/servicename/api/healthcheck/ping, verb = GET, remote (client) address = xxx, request processing start time = 2018-03-13T14:50:17.1396031Z, forward url = https://0.0.0.0:44338/api/healthcheck/ping, number of successful resolve attempts = 48, error = 2147949567, message = , phase = ResolveServicePartition
In code, I have this in the instance listener:
.UseKestrel(options =>
{
    options.Listen(IPAddress.Any, 44338, listenOptions =>
    {
        listenOptions.UseHttps(GetCertificate());
    });
})
ServiceManifest:
<Endpoint Protocol="https" Name="SslServiceEndpoint" Type="Input" Port="44338" />
Startup:
services.AddMvc(options =>
{
    options.SslPort = 44338;
    options.Filters.Add(new RequireHttpsAttribute());
});
plus:
var options = new RewriteOptions().AddRedirectToHttps(StatusCodes.Status301MovedPermanently, 44338);
app.UseRewriter(options);
Here is what I have in Azure (deployed through an ARM template):
Health probes
NAME PROTOCOL PORT USED BY
AppPortProbe TCP 44338 AppPortLBRule
FabricGatewayProbe TCP 19000 LBRule
FabricHttpGatewayProbe TCP 19080 LBHttpRule
SFReverseProxyProbe TCP 19081 LBSFReverseProxyRule
Load balancing rules
NAME LOAD BALANCING RULE BACKEND POOL HEALTH PROBE
AppPortLBRule AppPortLBRule (TCP/44338) LoadBalancerBEAddressPool AppPortProbe
LBHttpRule LBHttpRule (TCP/19080) LoadBalancerBEAddressPool FabricHttpGatewayProbe
LBRule LBRule (TCP/19000) LoadBalancerBEAddressPool FabricGatewayProbe
LBSFReverseProxyRule LBSFReverseProxyRule (TCP/19081) LoadBalancerBEAddressPool SFReverseProxyProbe
I have a cluster certificate and a reverse proxy certificate, and I authenticate to the API through Azure AD. In the ARM template:
"fabricSettings": [
{
"parameters": [
{
"name": "ClusterProtectionLevel",
"value": "[parameters('clusterProtectionLevel')]"
}
],
"name": "Security"
},
{
"name": "ApplicationGateway/Http",
"parameters": [
{
"name": "ApplicationCertificateValidationPolicy",
"value": "None"
}
]
}
],
Not sure what else could be relevant; any ideas or suggestions are really welcome.
Edit : code for GetCertificate()
private X509Certificate2 GetCertificate()
{
    var certificateBundle = Task.Run(async () => await GetKeyVaultClient()
        .GetCertificateAsync(Environment.GetEnvironmentVariable("KeyVaultCertifIdentifier")));

    var certificate = new X509Certificate2();
    certificate.Import(certificateBundle.Result.Cer);
    return certificate;
}

private KeyVaultClient GetKeyVaultClient() => new KeyVaultClient(async (authority, resource, scope) =>
{
    var context = new AuthenticationContext(authority, TokenCache.DefaultShared);
    var clientCred = new ClientCredential(Environment.GetEnvironmentVariable("KeyVaultClientId"),
        Environment.GetEnvironmentVariable("KeyVaultSecret"));
    var authResult = await context.AcquireTokenAsync(resource, clientCred);
    return authResult.AccessToken;
});
Digging into your code, I've realized there is nothing wrong with it except one thing. Since you use Kestrel, you don't need to set up anything extra in the ApplicationManifest; those settings are for the Http.Sys implementation. You don't even need an endpoint in the ServiceManifest (although it is recommended), as all of those things concern URL reservation for the service account and SSL binding configuration, neither of which is required with Kestrel.
What you do need to do is use IPAddress.IPv6Any when you configure SSL. Aside from being the recommended way to accept both IPv4 and IPv6 connections, it also performs a "correct" endpoint registration in SF. When you use IPAddress.Any, SF registers an endpoint like https://0.0.0.0:44338, and that is how the reverse proxy tries to reach the service, which obviously won't work: 0.0.0.0 doesn't correspond to any particular IP; it is just a way to say "any IPv4 address at all". When you use IPAddress.IPv6Any instead, you get a correct endpoint mapped to the VM's IP address, which can be resolved from within the VNet. You can see this for yourself in SF Explorer if you go down to the endpoint section in the service instance blade.