I need to deploy my Java application to an AWS EC2 instance using Terraform. The catch is that we should not use a *.pem file to deploy the application.
I tried creating an ELB and associating instances with it using Terraform. I am able to deploy the application over SSH with a .pem file to the EC2 instances' private IPs. But we shouldn't use a *.pem or *.ppk file, as that won't be allowed on production servers.
I also tried using Chef with Terraform, but that also requires a *.pem file to connect to the AWS instances.
Please let me know detailed steps/suggestions for deploying the application using Terraform without a .pem file.
If you can't make any changes to your instance after creating it (including deploying the application) then you will need to bake any and all changes into the AMI that Terraform deploys.
You might want to look into using Packer to create AMIs with your intended configuration and then use Terraform to deploy these AMIs.
For reference, this strategy is known as "immutable infrastructure" so you might want to do some further reading into this area.
If instead it's simply that SSH connectivity is not allowed and you can make changes over other ports, then you should be able to use an AMI that has a Chef client, Puppet agent, or Salt minion on it (there may well be other tools that work over a non-SSH protocol/port, but this restriction rules out Ansible) and then use any of those tools to continue configuring your instance. Obviously you could find a suitable AMI in the AMI marketplace or, once again, use Packer to set up the relevant configuration management client.
I have a simple web application running on my machine (Mac) using Docker. I want this application to load secrets from AWS Secret Manager. Does the application need to assume an IAM role to load the secret?
Also, I will eventually deploy this container to a self-managed Kubernetes cluster (no EKS/ECS). Is the process of loading secrets similar?
This is a Python FastAPI application, but examples in Spring Boot are welcome. I'm more interested in the process.
There are many roads to Rome in this case, but one way might be:
Create an IAM user that has access to the secret (and the KMS key it is encrypted with);
Create an access key for that user;
Set the access key ID and secret access key for that user as environment variables in your local environment.
When deploying to your own K8s cluster, you can also set the environment variables on the Pod (probably through some kind of CI/CD pipeline).
The boto3 module has a defined order in which it tries to authenticate itself; you can find more details here. Just make sure you name the environment variables correctly.
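To make the local piece concrete, here is a minimal sketch of reading a secret with boto3, assuming a hypothetical secret name and region and relying on boto3's normal credential lookup order (environment variables first in this setup):

```python
import json

import boto3

# Hypothetical secret name and region; replace with your own values.
SECRET_NAME = "my-app/dev/credentials"
REGION = "eu-west-1"

# boto3 picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the
# environment (or any other source in its documented lookup order).
client = boto3.client("secretsmanager", region_name=REGION)

response = client.get_secret_value(SecretId=SECRET_NAME)

# Secrets stored as key/value pairs come back as a JSON string.
secret = json.loads(response["SecretString"])
print(secret)
```

The same code should work unchanged inside a Pod as long as the environment variables (or, eventually, an attached IAM role) are available to the container.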
I have a client with an intranet infrastructure that can't be accessed over the internet or VPN, so I have to access it through TeamViewer.
This client gave me 10 VMs (Linux CentOS 6) to work with (I can't create new ones or destroy them). I need to prepare this infrastructure to run my CI/CD and deliver the software, so I need these services running before my software deploys:
Docker
Mongo DB
Postgres
Nginx
Jenkins
I'm thinking about two options to solve it:
Terraform CLI (remember I will need to access the client through TeamViewer and run terraform apply)
Ansible (here I can list the 10 machines in an inventory and run one playbook against all of them).
I've heard that Terraform is more for provisioning servers (VMs, EC2, ...), VPCs, subnets, and load balancers, while Ansible is more about configuring each machine in a more granular way. If this is correct, I think Ansible is the right choice for me.
Any suggestions guys?
Yes.
Terraform provisions your environment from scratch. It is an Infrastructure as Code tool.
Ansible configures your environment. It is a configuration management tool.
Often, people combine the two: first provision the network stack and servers using Terraform, and then configure the applications inside the servers using Ansible.
You already have the VMs, so opting for a configuration management tool (Chef, Ansible, Puppet, SaltStack) better fits your use case.
Is there a way to host Moqui on AWS? I was trying to host Moqui on an EC2 instance but couldn't figure out how to connect the two.
The Run and Deploy document on moqui.org has a section for a simple recommended deployment using ElasticBeanstalk and RDS:
https://www.moqui.org/m/docs/framework/Run+and+Deploy#AWSElasticBeanstalkandRDS
With more details about how you want to set things up on AWS the answer to how might vary from this.
For clustered setups things get more involved: you need the right settings for Hazelcast AWS discovery, and it is best to use an external ElasticSearch server such as an AWS ElasticSearch instance and configure Moqui through environment variables to use the Java REST Client mode instead of the Embedded Node mode. Settings for the moqui-hazelcast and moqui-elasticsearch components can be seen in the MoquiConf.xml file in each component.
I am looking for advice/ideas on how to continuously deploy new features to a Spring Boot web application that is hosted on an AWS EC2 instance. My current workflow:
bootRepackage my application to create a war file.
Upload that file to AWS.
Add a new feature to my application.
bootRepackage again.
Remove the current war from AWS, and upload the new one.
This is obviously not a good workflow: the application needs to be restarted, which could result in 1) downtime and 2) entries in the database being lost (if I were using Spring's default H2 database; I'm not, I'm using a standalone SQL server, but I'm making the point for this question), so I want to streamline it.
Is there any way to add a new feature to the current instance of the service on AWS? Is it possible to recompile the code "on the fly" to avoid the need to restart the application?
Is there any way to create a better setup that would allow me to just merge a new branch to master locally and push it, keeping the same instance in prod but now with the new feature?
Thank you in advance!
Update: is this really the correct answer?
If you are using a single AWS instance and deploying the application to that EC2 instance, assign an Elastic IP to the EC2 instance.
An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account.
Deploy the new version of the application on another AWS EC2 instance.
When the application is ready, reassign the Elastic IP from the existing EC2 instance to the new EC2 instance.
Elastic IPs are the simplest way to implement the blue-green switch.
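As an illustration of that switch, here is a minimal boto3 sketch of remapping an Elastic IP (the allocation and instance IDs below are placeholders, not real values):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder IDs; substitute the Elastic IP allocation ID and the
# instance running the new version of the application.
ALLOCATION_ID = "eipalloc-0123456789abcdef0"
NEW_INSTANCE_ID = "i-0123456789abcdef0"

# Re-point the Elastic IP at the new instance. AllowReassociation lets
# the address move even though it is currently attached to the old one.
ec2.associate_address(
    AllocationId=ALLOCATION_ID,
    InstanceId=NEW_INSTANCE_ID,
    AllowReassociation=True,
)
```

Once traffic looks healthy on the new instance, the old one can be stopped or kept around for a quick rollback.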
I am planning to migrate from EC2-Classic to EC2-VPC. My application reads messages from SQS, downloads assets from S3, performs the actions described in the SQS messages, and then updates RDS. I have the following queries:
Is it beneficial for me to migrate to Amazon VPC from Classic?
I create my EC2 machines using Ruby scripts and deploy code on them using Capistrano. In classic mode I used the IP address to deploy code using Capistrano. But in VPC there is a concept of private IP addresses and you cannot access a machine inside a subnet. So my question is:
How should I deploy code on the EC2 instances or rather how should I connect to them?
Thank You.
This question is pretty broad, but I'll take a stab at it:
Is it beneficial for me to migrate to Amazon VPC from Classic?
It's beneficial if you care about the security of your data in transit and at rest. In a VPC none of your traffic is exposed to the outside, and you can choose which components you want to expose in case you want to receive traffic/data from the outside, e.g. your ELB or ELBs.
I create my EC2 machines using Ruby scripts and deploy code on them using Capistrano. In classic mode I used the IP address to deploy code using Capistrano. But in VPC there is a concept of private IP addresses and you cannot access a machine inside a subnet. So my question is: How should I deploy code on the EC2 instances, or rather how should I connect to them?
You can actually assign a public IP to your EC2 machines in a VPC if you choose to. You can use that IP to deploy your code from the outside.
You can read about it here: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-ip-addressing.html
If you want more security, you can always deploy from a machine inside your VPC (one that is reachable over SSH from the outside). You can SSH to that machine and then run cap deploy from there.
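If you go the public IP route, a rough sketch of looking up the instance addresses with boto3 before handing them to Capistrano (the tag filter and region are assumptions; adjust them to however your Ruby scripts tag the instances):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Assumed tag; filter on whatever your provisioning scripts actually set.
response = ec2.describe_instances(
    Filters=[{"Name": "tag:Role", "Values": ["app-server"]}]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        # PublicIpAddress is only present if the instance has one assigned.
        print(
            instance["InstanceId"],
            instance.get("PublicIpAddress", "no public IP"),
            instance["PrivateIpAddress"],
        )
```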