Accessing an OpenStack Swift deployment on an EC2 VM remotely - amazon-ec2

So, I have deployed Swift Object Storage on an EC2 VM. I am able to access the objects using curl/the Python client for Swift from the same VM.
However, I am not sure how to access them remotely using the public URL of my Swift deployment.
While browsing, I came across swift3 (https://github.com/openstack/swift3), but I am not sure how to configure it or whether it is even applicable in my case (as I am not using S3, only EC2).
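For reference, a minimal sketch of what remote access might look like with python-swiftclient, assuming the Swift proxy listens on port 8080 with TempAuth enabled and the EC2 security group allows inbound traffic on that port; the public DNS name, user, and key below are placeholders:

```python
# Hypothetical example: connect to Swift via the EC2 instance's public address.
# Assumes the proxy-server port (8080 here) is open in the security group and
# that TempAuth credentials exist; replace the placeholders with real values.
from swiftclient.client import Connection

conn = Connection(
    authurl="http://<ec2-public-dns>:8080/auth/v1.0",  # public URL of the Swift proxy (placeholder)
    user="test:tester",                                # placeholder TempAuth user
    key="testing",                                     # placeholder TempAuth key
)

# List containers to verify remote connectivity.
resp_headers, containers = conn.get_account()
for container in containers:
    print(container["name"])
```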

Related

How to host Moqui on AWS EC2

Is there a way to host Moqui on AWS? I was trying to host Moqui using an EC2 instance but couldn't figure out a way to connect them.
The Run and Deploy document on moqui.org has a section for a simple recommended deployment using ElasticBeanstalk and RDS:
https://www.moqui.org/m/docs/framework/Run+and+Deploy#AWSElasticBeanstalkandRDS
Depending on the details of how you want to set things up on AWS, the answer might vary from this.
For clustered setups things get more involved: you need the right settings for Hazelcast AWS discovery, and it is best to use an external ElasticSearch server such as an AWS ElasticSearch instance and configure Moqui via environment variables to use the Java REST Client mode instead of the Embedded Node mode. Settings for the moqui-hazelcast and moqui-elasticsearch components can be seen in the MoquiConf.xml file in each component.

Continuously deploying features to Spring Boot application hosted by AWS

I am looking for advice/ideas on how to continuously deploy new features to a Spring Boot web application that is hosted on an AWS EC2 instance. My current workflow:
bootRepackage my application to create a war file.
Upload that file to AWS.
Add a new feature to my application.
bootRepackage again.
Remove the current war from AWS, and upload the new one.
This is obviously not a good workflow, as the application needs to be restarted, which could result in 1) downtime and 2) entries in the database being lost (if I were using Spring's default H2 database - I am not, I am using a standalone SQL server, but the point stands for this question), so I want to streamline it.
Is there any way to add a new feature to the current instance of the service on AWS? Is it possible to recompile the code "on the fly" to prevent the need to restart the application?
Is there any way of creating a better setup that would allow me to just merge a new branch into master locally and push it, keeping the same instance in production but with this new feature?
Thank you in advance!
Update: is this really the correct answer?
If you are using a single instance on AWS and deploying the application to an EC2 instance, assign an Elastic IP to that EC2 instance.
An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account.
Deploy the new version of the application on another AWS EC2 instance.
When the application is ready, reassign the Elastic IP from the existing EC2 instance to the new EC2 instance.
Elastic IPs are the simplest way to implement the blue-green switch.
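As an illustration of that remap step, a hedged boto3 sketch; the region, allocation ID, and instance ID are placeholders:

```python
# Hypothetical sketch: remap an Elastic IP from the old (blue) instance to the
# new (green) instance once the new deployment is healthy.
# Region, allocation ID, and instance ID below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.associate_address(
    AllocationId="eipalloc-0123456789abcdef0",  # Elastic IP allocation (placeholder)
    InstanceId="i-0fedcba9876543210",           # new EC2 instance (placeholder)
    AllowReassociation=True,                    # move the address off the old instance
)
```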

Deploy Application to AWS EC2 Instance using terraform

I need to deploy my Java application to an AWS EC2 instance using Terraform. The catch here is that we should not use a *.pem file to deploy the application.
I tried creating an ELB and associating instances using Terraform. I am able to deploy the application over SSH with a pem file to the EC2 instances' private IPs, but we shouldn't use a *.pem or *.ppk file, as that will not be allowed on production servers.
I also tried using Chef with Terraform, but that also requires a *.pem file to connect to the AWS instances.
Please let me know detailed steps/suggestions for how to deploy the application using Terraform without using a pem file.
If you can't make any changes to your instance after creating it (including deploying the application) then you will need to bake any and all changes into the AMI that Terraform deploys.
You might want to look into using Packer to create AMIs with your intended configuration and then use Terraform to deploy these AMIs.
For reference, this strategy is known as "immutable infrastructure" so you might want to do some further reading into this area.
If instead it's simply that SSH connectivity is not allowed and you can make changes over other ports, then you should be able to use an AMI that has a Chef client, Puppet agent or Salt minion on it (there may well be other tools that work over a non-SSH protocol/port, but this restriction rules out Ansible) and then use any of those tools to continue to configure your instance. Obviously you could find a suitable AMI from the AMI marketplace or, once again, use Packer to set up the relevant configuration management client.
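To illustrate the "bake it into the AMI" step (which Packer automates along with the provisioning itself), a hedged boto3 sketch of creating an AMI from an instance that already has the application installed; the region, instance ID, and image name are placeholders:

```python
# Hypothetical sketch: create an AMI from an instance that already has the
# application baked in, so Terraform can launch copies of it without any
# post-boot SSH access. Region, instance ID, and image name are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",  # instance with the app installed (placeholder)
    Name="my-app-image-v1",            # AMI name (placeholder)
    Description="Immutable image with the Java application baked in",
)
print(response["ImageId"])             # pass this AMI ID to Terraform
```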

How to access elasticsearch in EC2?

I am new to AWS, and recently spun up an EC2 instance and installed Elasticsearch on it. I can now access Elasticsearch via http://localhost:9200 after SSHing into the box, but I am a little stuck on how to access it from outside.
What's the best practice if I would like to write some app to access this EC2 instance? And how can I configure it so it can be accessed externally?
Any help would be appreciated.
Thanks.
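As a rough illustration only: assuming the instance's security group is opened for inbound TCP 9200 (ideally restricted to your app's IP range) and Elasticsearch is bound to a non-loopback interface, an external client could reach it like this; the hostname is a placeholder:

```python
# Hypothetical check of external access to Elasticsearch on the EC2 instance.
# Assumes the security group allows inbound TCP 9200 from this client's IP and
# that Elasticsearch listens on a non-loopback interface; hostname is a placeholder.
import requests

response = requests.get("http://<ec2-public-dns>:9200", timeout=5)
print(response.json())  # cluster name and version info if reachable
```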

Deploy application on AWS VPC

I am planning to migrate from EC2-Classic to EC2-VPC. My application reads messages from SQS, downloads assets from S3, performs the actions mentioned in the SQS messages and then updates RDS. I have the following queries:
Is it beneficial for me to migrate to Amazon VPC from Classic
I create my EC2 machines using Ruby scripts and deploy code on them using Capistrano. In Classic mode I used the IP address to deploy code with Capistrano. But in a VPC there is the concept of a private IP address, and you cannot access a machine inside a subnet. So my question is:
How should I deploy code on the EC2 instances or rather how should I connect to them?
Thank You.
This question is pretty broad but I'll take a stab at it:
Is it beneficial for me to migrate to Amazon VPC from Classic
It's beneficial if you care about the security of your data in transit and at rest. In a VPC none of your traffic is exposed to the outside, and you can choose which components you want to expose in case you want to receive traffic/data from the outside, i.e. your ELB or ELBs.
I create my EC2 machines using ruby scripts, and deploy code on them using capistrano. In classic mode I used the IP address to deploy code using capistrano. But in VPC there is a concept of private IP address and you cannot access a machine inside a subnet. So my question is: How should I deploy code on the EC2 instances or rather how should I connect to them?
You can actually assign a public IP to your EC2 machines in a VPC if you choose to. You can use that IP to deploy your code from the outside.
You can read about it here: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-ip-addressing.html
If you want more security you can always deploy from a machine in your VPC (that has SSH access to the outside). You can ssh to that machine and then run cap deploy from there.
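For example, a hedged boto3 sketch of looking up the instances' public IPs (falling back to private IPs when deploying from inside the VPC) to feed to a deploy tool such as Capistrano; the region and tag filter are placeholders:

```python
# Hypothetical sketch: collect the IPs of the VPC instances that should receive
# a deploy, e.g. to feed them to Capistrano. Region and tag filter are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Role", "Values": ["app"]},                 # placeholder tag
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        # PublicIpAddress is only present when a public IP is assigned to the instance.
        print(instance.get("PublicIpAddress", instance["PrivateIpAddress"]))
```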
