How does a local Docker container load secrets from AWS Secrets Manager? - spring-boot

I have a simple web application running on my machine (Mac) using Docker. I want this application to load secrets from AWS Secrets Manager. Does the application need to assume an IAM role to load the secrets?
Also, I will eventually deploy this container to a self-managed Kubernetes cluster (no EKS/ECS). Is the process of loading secrets similar?
This is a Python FastAPI application, but examples in Spring Boot are welcome. I'm more interested in the process.

There are many roads to Rome in this case, but one way might be:
Create an IAM user that has access to the secret (and to the KMS key it is encrypted with);
Create an access key for that user;
Set the access key ID and secret access key as environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) in your local environment.
When deploying to your own K8S cluster, you can also set those environment variables on the Pod (probably through your CI/CD pipeline).
The boto3 module tries a defined order of credential sources when authenticating itself; you can find more details here. Just make sure you name the environment variables correctly.
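As a rough sketch (the secret name, region, and JSON payload are assumptions, not something from the question), fetching a secret with boto3 might look like this:

```python
import json

import boto3

# boto3 picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the
# environment automatically, so no explicit credentials appear in code.
client = boto3.client("secretsmanager", region_name="us-east-1")

# "my-app/db-credentials" is a placeholder; use your real secret name or ARN.
response = client.get_secret_value(SecretId="my-app/db-credentials")

# Secrets Manager returns the payload as a string, often JSON key/value pairs.
secret = json.loads(response["SecretString"])
```

On your own Kubernetes cluster the same code works unchanged, as long as the two environment variables are set on the Pod.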

Related

How to host Moqui on AWS EC2

Is there a way to host Moqui on AWS? I was trying to host Moqui using an EC2 instance but couldn't figure out a way to connect them.
The Run and Deploy document on moqui.org has a section for a simple recommended deployment using ElasticBeanstalk and RDS:
https://www.moqui.org/m/docs/framework/Run+and+Deploy#AWSElasticBeanstalkandRDS
With more details about how you want to set things up on AWS the answer to how might vary from this.
For clustered setups things get more involved: you need the right settings for Hazelcast AWS discovery, and it is best to use an external ElasticSearch server (such as an AWS ElasticSearch instance) and configure Moqui through environment variables to use the Java REST Client mode instead of the Embedded Node mode. Settings for the moqui-hazelcast and moqui-elasticsearch components can be seen in the MoquiConf.xml file in each component.

Continuously deploying features to Spring Boot application hosted by AWS

I am looking for advice/ideas on how to continuously deploy new features to a Spring Boot web application that is hosted on an AWS EC2 instance. My current workflow:
bootRepackage my application to create a war file.
Upload that file to AWS.
Add a new feature to my application.
bootRepackage again.
Remove the current war from AWS, and upload the new one.
This is obviously not a good workflow: the application needs to be restarted, which could result in 1) downtime and 2) entries in the database being lost (if I were using Spring's default H2 database - I'm not, I'm using a standalone SQL server, but the point stands), so I want to streamline it.
Is there any way to add a new feature to the current instance of the service on AWS? Is it possible to recompile the code "on the fly" to avoid restarting the application?
Is there any way of creating a better setup that would let me just merge a new branch into master locally and push it, keeping the same instance in prod but with the new feature?
Thank you in advance!
Update: is this really the correct answer?
If you are using a single AWS EC2 instance and deploying the application to it, assign an Elastic IP to that instance.
An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account.
Deploy the new version of the application on another AWS EC2 instance.
When the application is ready, reassign the Elastic IP from the existing EC2 instance to the new EC2 instance.
Elastic IPs are the simplest way to implement the blue-green switch.
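As an illustration of the remap step (the allocation ID and instance ID below are placeholders), the switch can be scripted with boto3:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder IDs: the Elastic IP's allocation ID and the new ("green") instance.
ALLOCATION_ID = "eipalloc-0123456789abcdef0"
GREEN_INSTANCE_ID = "i-0123456789abcdef0"

# Re-associate the Elastic IP with the new instance. AllowReassociation
# lets the address move even though it is currently attached elsewhere.
ec2.associate_address(
    AllocationId=ALLOCATION_ID,
    InstanceId=GREEN_INSTANCE_ID,
    AllowReassociation=True,
)
```

Once the address points at the new instance, the old one can be terminated at leisure; if the new version misbehaves, the same call moves the address straight back.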

Deploy Application to AWS EC2 Instance using terraform

I need to deploy my Java application to an AWS EC2 instance using Terraform. The catch here is that we should not use a *.pem file to deploy the application.
I tried creating an ELB and associating instances using Terraform. I am able to deploy the application over SSH with a .pem file to the EC2 instances' private IPs. But we shouldn't use *.pem or *.ppk files, as they won't be allowed on production servers.
I tried using Chef with Terraform, but that also requires a *.pem file to connect to the AWS instances.
Please let me know the detailed steps/suggestions for how to deploy the application using Terraform without using a pem file.
If you can't make any changes to your instance after creating it (including deploying the application) then you will need to bake any and all changes into the AMI that Terraform deploys.
You might want to look into using Packer to create AMIs with your intended configuration and then use Terraform to deploy these AMIs.
For reference, this strategy is known as "immutable infrastructure" so you might want to do some further reading into this area.
If instead it's simply that SSH connectivity is not allowed and you can make changes over other ports, then you should be able to use an AMI that has a Chef client, Puppet agent or Salt minion on it (there may well be other tools that work over a non-SSH protocol/port, but this restriction rules out Ansible) and then use any of those tools to continue configuring your instance. Obviously you could find a suitable AMI in the AMI marketplace or, once again, use Packer to set up the relevant configuration management client.
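As a rough sketch of the deploy step under the immutable-infrastructure model, shown here in Python with boto3 rather than Terraform (the AMI ID is a placeholder for whatever Packer produced):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder: the AMI ID that Packer baked with the application inside.
BAKED_AMI_ID = "ami-0123456789abcdef0"

# Launch a fresh instance from the immutable image; no SSH or .pem file
# is needed because the application is already on the AMI.
ec2.run_instances(
    ImageId=BAKED_AMI_ID,
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
```

In Terraform the equivalent is just pointing the instance resource's ami argument at the new image and applying; either way, a new release means a new AMI and new instances, never an in-place change.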

How to set IAM role for container level access in Mesos

I am deploying my microservice to an EC2 instance via Mesos. The problem is that I am sharing the EC2 instance with other teams' microservices. All these microservices deal with different S3 buckets, and we don't want the other teams to have access to our buckets. I need to assign an IAM role to my container so that only my microservices deployed on the EC2 instance can access my S3 bucket.
We are not using ECS and we deploy using Mesos. Any input or comment is appreciated. Thanks in advance.
There is no native AWS support for this. In the meantime you can use Lyft's metadataproxy (see also the blog post).
Quoting the blog:
We had an idea to build a web service that proxies calls to the metadata service on http://169.254.169.254 and pass through most of the calls to the real metadata service, but capture calls to the IAM endpoints. By capturing the IAM endpoints we can decide which IAM credentials we’ll hand back.
...
To know which IAM roles should be assumed, the metadataproxy has access to the docker socket. When it gets a request, it looks up the container, based on its request IP, finds that container’s environment variables and uses the value of the IAM_ROLE environment variable as the role to assume. It then uses STS to assume the role, caches the credentials in memory (for further requests) and returns them back to the caller. If the credentials cached in memory are set to expire, the proxy will re-assume the credentials.
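To make that last step concrete, here is a minimal sketch of what assuming a role via STS looks like with boto3 (the role ARN and session name are placeholders):

```python
import boto3

sts = boto3.client("sts")

# Placeholder ARN: the role named in the container's IAM_ROLE variable.
ROLE_ARN = "arn:aws:iam::123456789012:role/my-service-role"

# Assume the role and receive temporary credentials, roughly what
# metadataproxy caches and hands back to the requesting container.
resp = sts.assume_role(RoleArn=ROLE_ARN, RoleSessionName="my-service")
creds = resp["Credentials"]
print(creds["AccessKeyId"], creds["Expiration"])
```

The container itself never sees this code; its AWS SDK simply queries 169.254.169.254 as usual, and the proxy answers with the role-scoped credentials.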

How can I setup and deploy a database with Deis (PaaS)

I'm trying to set up a database with Deis. I know this is possible, but there doesn't seem to be any documentation about how to do it other than setting an ENV variable. How could I set up, say, a MongoDB or Cassandra Docker container, deploy it, and have my Deis app use it?
If you're trying to deploy now, a possible solution is to set up a Docker container, make it publicly routable, and then configure your application to use that container through an environment variable, following Heroku's 12-factor app best practices. There is a feature request for a Deis service gateway that will act like Heroku's Add-on Marketplace, but it's not there yet.
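For example, a Python app following that pattern reads the connection string from the environment; the MONGO_URL variable name and the pymongo usage here are illustrative assumptions:

```python
import os

from pymongo import MongoClient

# Hypothetical variable, set with something like: deis config:set MONGO_URL=...
# It points at the publicly routable MongoDB container.
mongo_url = os.environ["MONGO_URL"]

client = MongoClient(mongo_url)
# Assumes the URL includes the database name, e.g. mongodb://host:27017/mydb
db = client.get_default_database()
```

Swapping the database later then only means changing the config value on the app, not redeploying code.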
