Connect to AWS RDS using AWS Secrets Manager - spring

I'm new to AWS and I tried to use Secrets Manager to connect to an RDS database. I managed to do it with a Spring datasource, but I want the connection to RDS to be done using the DB identifier.
I don't know exactly how to do it; this is my current application.properties:
#spring.datasource.url=jdbc-secretsmanager:postgresql://database-1.c5xr47tuzrvd.us-west-2.rds.amazonaws.com/postgres
#spring.datasource.driver-class-name=com.amazonaws.secretsmanager.sql.AWSSecretsManagerPostgreSQLDriver
#spring.datasource.username=/secrets/shopping-cart/db
cloud.aws.rds.database-1.username=postgres
cloud.aws.rds.database-1.password=****
cloud.aws.rds.database-1.databaseName=postgres
Can you please guide me on how I can do it?
Thank you!

Here is an AWS Doc that walks you through how to perform this use case in a Spring Boot app. In this example use case, an Aurora Serverless database is used.
Furthermore, to successfully connect to the database using the RdsDataClient object (which is part of the AWS SDK for Java V2), you have to set up an AWS Secrets Manager secret that is used for authentication. This doc shows you how to hook this value into the Java logic as well.
Note that you can only use the RdsDataClient object for an Aurora Serverless DB cluster or an Aurora PostgreSQL database.
To use the RdsDataClient object, you require the following two Amazon Resource Name (ARN) values:
An ARN of the Aurora Serverless database.
An ARN of the AWS Secrets Manager secret that is used to access the database.
To read this example use case, see:
Creating the Amazon Aurora Serverless application using the AWS SDK for Java
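As a minimal sketch of what that looks like with the AWS SDK for Java V2 (the ARNs, region, and database name below are placeholders to replace with your own values):

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rdsdata.RdsDataClient;
import software.amazon.awssdk.services.rdsdata.model.ExecuteStatementRequest;
import software.amazon.awssdk.services.rdsdata.model.ExecuteStatementResponse;

public class RdsDataExample {
    public static void main(String[] args) {
        // Placeholder ARNs: the Aurora cluster and the Secrets Manager secret.
        String resourceArn = "arn:aws:rds:us-west-2:123456789012:cluster:database-1";
        String secretArn = "arn:aws:secretsmanager:us-west-2:123456789012:secret:shopping-cart-db";

        RdsDataClient dataClient = RdsDataClient.builder()
                .region(Region.US_WEST_2)
                .build();

        // The Data API authenticates with the secret, so the app itself
        // holds no JDBC URL, username, or password.
        ExecuteStatementRequest request = ExecuteStatementRequest.builder()
                .resourceArn(resourceArn)
                .secretArn(secretArn)
                .database("postgres")
                .sql("SELECT 1")
                .build();

        ExecuteStatementResponse response = dataClient.executeStatement(request);
        System.out.println(response.records());
    }
}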

Related

Spring Boot on AWS via CDK

I have a microservice stack working locally - Docker, Eureka, Config Server, Spring Boot, and multiple PostgreSQL databases. However, now that it is time to deploy to my AWS account, I am unable to find good documentation on how to do this via CDK V2. I really want the ability to deploy this way, as I will have multiple duplicate (DEV/QA/PROD) environments.
I am struggling with how to build this via CDK - multiple subnets (one for the databases and one for the services), Route53, S3, IAM, etc. Most examples I find are for a single service, but they don't show how to create the RDS instances and connect everything together.
Can anyone point me to some good tutorials or examples so I can move forward?
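Not a full tutorial, but as a starting point, a minimal CDK v2 Java sketch of the VPC and RDS pieces might look like the following (the stack name and subnet group names are illustrative; instance sizing, credentials, and security groups are left to CDK defaults):

import software.amazon.awscdk.Stack;
import software.amazon.awscdk.services.ec2.SubnetConfiguration;
import software.amazon.awscdk.services.ec2.SubnetSelection;
import software.amazon.awscdk.services.ec2.SubnetType;
import software.amazon.awscdk.services.ec2.Vpc;
import software.amazon.awscdk.services.rds.DatabaseInstance;
import software.amazon.awscdk.services.rds.DatabaseInstanceEngine;
import software.amazon.awscdk.services.rds.PostgresEngineVersion;
import software.amazon.awscdk.services.rds.PostgresInstanceEngineProps;
import software.constructs.Construct;

import java.util.List;

public class AppStack extends Stack {
    public AppStack(final Construct scope, final String id) {
        super(scope, id);

        // One VPC with separate subnet groups for the services and the databases.
        Vpc vpc = Vpc.Builder.create(this, "AppVpc")
                .maxAzs(2)
                .subnetConfiguration(List.of(
                        SubnetConfiguration.builder()
                                .name("public").subnetType(SubnetType.PUBLIC).build(),
                        SubnetConfiguration.builder()
                                .name("services").subnetType(SubnetType.PRIVATE_WITH_EGRESS).build(),
                        SubnetConfiguration.builder()
                                .name("db").subnetType(SubnetType.PRIVATE_ISOLATED).build()))
                .build();

        // PostgreSQL in the isolated subnets; CDK generates the master
        // credentials in Secrets Manager by default.
        DatabaseInstance.Builder.create(this, "Postgres")
                .engine(DatabaseInstanceEngine.postgres(
                        PostgresInstanceEngineProps.builder()
                                .version(PostgresEngineVersion.VER_14)
                                .build()))
                .vpc(vpc)
                .vpcSubnets(SubnetSelection.builder()
                        .subnetType(SubnetType.PRIVATE_ISOLATED)
                        .build())
                .build();
    }
}

Duplicating this per environment (DEV/QA/PROD) then amounts to instantiating the stack multiple times with different ids.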

How to connect to AWS services using IAM role ARNs in a Spring Boot application

I am using the AWS SQS, SNS, and S3 services, so I have created the roles and queues in AWS. Now I have the role ARNs and queue ARNs. How can I connect to these services through my Spring Boot app?
I have gone through this link, but I didn't get how to use the credentials from AWSCredentialsProvider. Please help me with this.
Thanks in advance!
"I didn't get how to use the cerdentials from AWSCredentialsProvider."
I am going to answer this question using the recommended SDK - which is AWS SDK for Java V2. You may find V1 in old online content - but using V1 is not best practice.
There are different ways of handling creds when writing a Java App that uses AWS SDK for Java V2 - including a Spring BOOT app.
You can use an Environment variable provider:
import software.amazon.awssdk.auth.credentials.EnvironmentVariableCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rdsdata.RdsDataClient;

// Reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment.
Region region = Region.US_EAST_1;
RdsDataClient dataClient = RdsDataClient.builder()
        .credentialsProvider(EnvironmentVariableCredentialsProvider.create())
        .region(region)
        .build();
You can use the shared credentials and config files. This reads your credentials from the credentials file located under ~/.aws:
// No provider specified: the default chain falls back to ~/.aws/credentials.
Region region = Region.US_EAST_1;
RdsDataClient dataClient = RdsDataClient.builder()
        .region(region)
        .build();
You can use a StaticCredentialsProvider, where you put your credentials directly in the code (avoid this outside of local testing):
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

// Hard-coded credentials; the placeholders below are not real values.
AwsBasicCredentials credentials = AwsBasicCredentials.create("<KEY>", "<Secret Key>");
StaticCredentialsProvider staticCredentials = StaticCredentialsProvider.create(credentials);
Region region = Region.US_EAST_1;
DynamoDbClient ddb = DynamoDbClient.builder()
        .region(region)
        .credentialsProvider(staticCredentials)
        .build();
All of these credential options are explained in the AWS SDK for Java V2 Developer Guide, which I strongly recommend that any developer programming with the AWS SDK for Java V2 read.
Finally, you will find code examples of writing a Spring Boot app with the AWS SDK for Java V2 in the AWS GitHub code repo. For example:
Creating your first AWS Java web application
This creates an example Spring Boot web app that submits data to an Amazon DynamoDB table.
The idea is that assuming roles is not part of the application; it's handled by the infrastructure your application runs on.
For example: if you have a Spring Boot application running on EC2 (or Fargate, Lambda, Elastic Beanstalk, or anywhere else in AWS), that EC2 instance should have the role attached. The role should then have rights to access SQS (or any other service). When your application, running on EC2 with the right role, tries to use SQS, everything will work.
If you're testing the code on your own machine, it will not work, because your machine has not assumed the role.
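As a minimal sketch of what this looks like in code (SqsFromRole is an illustrative class name): the builder below sets no credentials provider, so the default chain resolves them - from environment variables or ~/.aws locally, or from the attached role on EC2/ECS:

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sqs.SqsClient;

public class SqsFromRole {
    public static void main(String[] args) {
        // No credentialsProvider set: the default chain checks environment
        // variables, the ~/.aws files, and finally the EC2/ECS role.
        SqsClient sqs = SqsClient.builder()
                .region(Region.US_EAST_1)
                .build();
        sqs.listQueues().queueUrls().forEach(System.out::println);
    }
}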

How to stop Spring Cloud AWS Secrets Manager from trying to load profile-based secrets

I'm using Spring Cloud AWS Secrets Manager support to load configuration defined by Terraform, which creates the application secret defaults.
After adding a policy statement to the services accessing the secret, Spring fails to start because it attempts to read all kinds of secrets for profiles that do not exist in Secrets Manager.
How can I restrict the Spring Cloud Secrets Manager support to only read secrets I have explicitly granted access to, without needing to create empty secrets for every profile?
Unfortunately, this is not possible yet. We have a pull request that enables skipping the loading of profile-based secrets, which will likely be merged in 2.3, and we are rethinking the Secrets Manager integration for 3.0.

Configuration or link required to connect to a Pivotal Cloud Cache cluster in Spring Boot microservices

I am setting up Spring Boot microservices with a bi-directional Pivotal Cloud Cache cluster.
I have set up the bi-directional cluster in Pivotal Cloud, and I have a list of locators with ports.
I already have some online docs:
https://github.com/pivotal-cf/PCC-Sample-App-PizzaStore
But I couldn't understand which configuration the Spring Boot app uses to know how to connect.
I am looking for a tutorial or reference showing how to link a Spring Boot app with PCC (GemFire).
The way you configure an app running in PCF (Pivotal Cloud Foundry) to talk to a PCC (Pivotal Cloud Cache) service instance is by binding the app to that service instance. You can bind it either by running the cf bind-service command or by adding the service name to the app's manifest.yml, something like the below:
path: build/libs/cloudcache-pizza-store-1.0.0-SNAPSHOT.jar
services:
- dev-service-instance
I hope you are using Spring Boot for Apache Geode & Pivotal GemFire (SBDG) in your app; if not, I recommend using it, as it makes connecting to a PCC service instance extremely easy. SBDG has the logic to extract the credentials and hostname:port pairs needed to connect to a service instance.
You as an app developer just need to:
Create the service instance.
Bind your app to the service instance.
The boilerplate code for configuring credentials, hostnames, and IPs is handled by SBDG.
When you deploy an application in Cloud Foundry (or Pivotal Cloud), you need to bind it to one or more services. Service details are then automatically exposed to the app via the VCAP_SERVICES environment variable. In the case of PCC this will include the name and port of the locator. By adding the spring-geode-starter (or spring-gemfire-starter) jar to the application, it will automatically process the VCAP_SERVICES value and extract the necessary endpoint information in order to connect to the cluster.
Furthermore, if security is enabled on your PCC instance, you will also need to have created a service key. As with the locator details, the necessary credentials will be exposed via VCAP_SERVICES and the starter jar will automatically process and configure them.
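As a minimal sketch of how little application code this requires (PizzaStoreApplication is an illustrative name; it assumes the spring-geode-starter dependency is on the classpath):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// With spring-geode-starter on the classpath, SBDG auto-configures a
// ClientCache and pulls the locator endpoints and credentials out of
// VCAP_SERVICES once the app is bound to the PCC service instance;
// no explicit connection properties are needed here.
@SpringBootApplication
public class PizzaStoreApplication {
    public static void main(String[] args) {
        SpringApplication.run(PizzaStoreApplication.class, args);
    }
}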

How can I configure Spring Cloud Config Server with refresh functionality on AWS ECS

I am migrating a Spring Boot application from PCF to AWS ECS. It currently uses a Cloud Config server reading properties from a git repo, plus AWS RDS. Is there a way to implement a config server, along with refresh, in AWS ECS?
I think ECS operates on a different level.
Spring Cloud Config Server is a solution that works especially well with Spring Boot based applications. For example, the refresh option that you've mentioned is implemented as a special scope, which is purely a Spring (application-level) thing.
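For example, a bean like the following sketch (GreetingService and the app.greeting property are illustrative) is re-created when a refresh is triggered, such as via POST /actuator/refresh, and so picks up changed values:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Component;

// @RefreshScope beans are rebuilt on a refresh event, so they pick up
// property values that changed in the config server's backing git repo.
@RefreshScope
@Component
public class GreetingService {

    @Value("${app.greeting:hello}") // illustrative property with a default
    private String greeting;

    public String greeting() {
        return greeting;
    }
}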
On the other hand, AWS ECS (which stands for Elastic Container Service) provides a way to work with containers in a general sense (with scaling and everything). It doesn't require the containers to be Spring-based or even Java-based.
So I think you might want to consider keeping a Spring Boot driven microservice for the config server, just like you have now, but wrapping it in a Docker container and deploying it on AWS ECS.
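A minimal sketch of that config server's entry point (ConfigServerApplication is an illustrative name; it assumes spring-cloud-config-server is on the classpath):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

// The same config server that ran on PCF: @EnableConfigServer turns this
// Spring Boot app into a config server that serves properties from the
// git repo configured via spring.cloud.config.server.git.uri.
@SpringBootApplication
@EnableConfigServer
public class ConfigServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}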
