I'm considering using Heroku for a NodeJS app, and I was wondering if their Dynos enjoy the free internal data transfer inside the AWS network.
I want to use DynamoDB, ElastiCache, RDS, SQS and a bunch of other AWS offerings - if I can connect to all of them from Heroku, which region and AZ do I need to set them up in to talk to them for free from the Heroku Dynos?
Heroku runs in the US East (us-east-1) region, so as long as you set your services up there you shouldn't incur any transfer costs between dynos and those services.
There are more details on the https://devcenter.heroku.com/articles/amazon_rds page - it relates to RDS, but a lot of it is general Amazon material such as security groups.
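If it helps, here's what "set them up in US East" looks like from the application side -- a minimal Node/TypeScript sketch, assuming the AWS SDK for JavaScript v3 (swap in whichever clients you actually use):

```typescript
// Minimal sketch: pin the AWS clients to the same region Heroku runs in
// (us-east-1) so traffic between dynos and these services stays in-region.
// Package names assume the AWS SDK for JavaScript v3.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { SQSClient } from "@aws-sdk/client-sqs";

const region = "us-east-1"; // Heroku's US region lives here

export const dynamo = new DynamoDBClient({ region });
export const sqs = new SQSClient({ region });

// Credentials are picked up from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY,
// which you can set on the app with `heroku config:set`.
```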
I would like to connect Heroku dynos to external compute and storage resources (e.g. on-prem, multi-cloud, etc.).
Heroku Private Spaces and Shield have VPC peering with AWS, but that introduces additional overhead.
Does ZeroTier work with Heroku dynos? If so, are there any gotchas to be aware of?
I want to host a production Rails application on Heroku but have some doubts. Can there be any problems with accessing the application from Russia because of the Roskomnadzor ban?
Heroku runs on Amazon Web Services, and it looks like many AWS IP addresses have been blocked in the past:
On 13 April 2018, messaging service Telegram was banned... The ban has been enforced via the blockage of over 15.8 million IP addresses. IPs associated with Amazon Web Services and Google Cloud Platform are included in the block, due to Telegram's use of these platforms; this measure resulted in collateral damage due to usage of the platforms by other services in the country, including... many other unknown websites being blocked for no reason for a month.
I'm not sure that any host would be completely safe from this kind of block.
I am planning to migrate from EC2-Classic to EC2-VPC. My application reads messages from SQS, downloads assets from S3, performs the actions mentioned in the SQS messages, and then updates RDS. I have the following queries:
Is it beneficial for me to migrate to Amazon VPC from Classic?
I create my EC2 machines using Ruby scripts and deploy code on them using Capistrano. In Classic mode I used the IP address to deploy code with Capistrano, but in a VPC there is the concept of a private IP address and you cannot access a machine inside a subnet. So my question is:
How should I deploy code on the EC2 instances, or rather, how should I connect to them?
Thank you.
This question is pretty broad, but I'll take a stab at it:
Is it beneficial for me to migrate to Amazon VPC from Classic?
It's beneficial if you care about the security of your data in transit and at rest. In a VPC, none of your traffic is exposed to the outside, and you can choose which components you want to expose in case you want to receive traffic/data from the outside, e.g. your ELB or ELBs.
I create my EC2 machines using Ruby scripts and deploy code on them using Capistrano. In Classic mode I used the IP address to deploy code with Capistrano, but in a VPC there is the concept of a private IP address and you cannot access a machine inside a subnet. So my question is: How should I deploy code on the EC2 instances, or rather, how should I connect to them?
You can actually assign a public IP to your EC2 machines in a VPC if you choose to. You can use that IP to deploy your code from the outside.
You can read about it here: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-ip-addressing.html
If you want more security, you can always deploy from a machine inside your VPC that is reachable over SSH from the outside: SSH to that machine and then run cap deploy from there.
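For the "use that IP to deploy from the outside" part, here's a rough sketch of looking up an instance's public IP programmatically. The AWS SDK for JavaScript v3 is an assumption on my part; the Ruby SDK the question mentions has an equivalent describe_instances call.

```typescript
// Rough sketch: look up the public IP of a VPC instance so a deploy tool
// (e.g. Capistrano) can target it. Assumes the AWS SDK for JavaScript v3.
import { EC2Client, DescribeInstancesCommand } from "@aws-sdk/client-ec2";

async function publicIpOf(instanceId: string): Promise<string | undefined> {
  const ec2 = new EC2Client({ region: "us-east-1" }); // adjust to your region
  const { Reservations } = await ec2.send(
    new DescribeInstancesCommand({ InstanceIds: [instanceId] })
  );
  // PublicIpAddress is only present if the instance has a public or Elastic IP.
  return Reservations?.[0]?.Instances?.[0]?.PublicIpAddress;
}

// Hypothetical instance ID; feed the result into your deploy scripts.
publicIpOf("i-0123456789abcdef0").then((ip) => console.log(ip ?? "no public IP"));
```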
I have an application deployed to Heroku. I'm using a service that requires me to access their SFTP server from a static IP address. I know Heroku dynos are unreliable in this regard. I have successfully achieved this using the Proximo add-on, but it's too expensive for the amount of traffic I'll be sending (around 500 MB/month). Is there an alternative? I'm inclined towards using an EC2 instance but am not quite sure what's required to create a proxy or whatever.
I'd go with an EC2 micro instance; pushing bits around doesn't really consume much CPU, so it's unlikely to get throttled. I would then give that instance an elastic IP address and communicate that address to the other service. (Whatever I choose to do later, I can always spin up another instance and associate it to that IP.) I would then deploy a SOCKS proxy (Dante?); SOCKS has pretty widespread application support, and it can handle SFTP just fine.
From here, there are a couple of details specific to Heroku -- for one, you'll want to configure your proxy server's EC2 security group so that Heroku can access it (see Dynos and the Dyno Manifold). You'll also want to enable authentication on the SOCKS server, since granting Heroku access to your proxy grants everyone on Heroku access to your proxy. Then, heroku config:set SOME_SERVICE_SOCKS_PROXY=socks://user:pass@ip-10-1-2-3.ec2.internal, and have your application look for that environment variable and do the right thing.
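The "look for that environment variable and do the right thing" step might look roughly like this in a Node app. The socks and ssh2 packages, the SFTP host, and the credentials are my assumptions for illustration, not part of the setup above:

```typescript
// Rough sketch: route SFTP through the SOCKS proxy advertised in
// SOME_SERVICE_SOCKS_PROXY (set via `heroku config:set` above).
// The `socks` and `ssh2` npm packages are assumptions, not something the
// answer prescribes -- any SOCKS-capable SFTP client would do.
import { SocksClient } from "socks";
import { Client } from "ssh2";

async function sftpViaProxy(): Promise<void> {
  // e.g. socks://user:pass@ip-10-1-2-3.ec2.internal
  const proxyUrl = new URL(process.env.SOME_SERVICE_SOCKS_PROXY!);

  // Open a TCP connection to the SFTP server through the SOCKS proxy.
  const { socket } = await SocksClient.createConnection({
    proxy: {
      host: proxyUrl.hostname,
      port: Number(proxyUrl.port || 1080),
      type: 5, // SOCKS5, so the proxy can require authentication
      userId: proxyUrl.username,
      password: proxyUrl.password,
    },
    command: "connect",
    destination: { host: "sftp.example.com", port: 22 }, // hypothetical SFTP endpoint
  });

  // Hand the proxied socket to the SSH client instead of letting it dial directly.
  const conn = new Client();
  conn
    .on("ready", () => {
      conn.sftp((err, sftp) => {
        if (err) throw err;
        // sftp.fastPut(...) / sftp.fastGet(...) as needed, then:
        conn.end();
      });
    })
    .connect({ sock: socket, username: "sftp-user", password: "sftp-pass" }); // hypothetical credentials
}
```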
You'll likely be paying $0.01/GB for intra-region data transfer between your proxy and Heroku, since statistically, your application will be in a different availability zone most of the time. Heroku dynos last about 24 hours in production, so while the exact location will dance around unpredictably, it'll probably land in the $0.008/GB range in aggregate. You'll also be paying for the micro instance itself (though reserved instances make them stupid cheap) as well as the usual AWS Internet data transfer rates.
I'm trying to create a personal/professional website within a college domain. From the university I've requested a static IP address that is mapped to the name "http://lastname.someuniversity.edu". I would like to set up an Amazon EC2 instance to host the website.
I know how to create/administer the website on the EC2 instance I just don't know how to get the EC2 instance to talk to the university (and vice-versa). The IT person at the university wasn't terribly helpful.
I know how to set up a local machine to run as the web server, just not how to get the Amazon EC2 instance to 'sit inside' the university.
Thanks for the help,
Will
If you want the Amazon EC2 instance "to sit inside your university" you may want to establish a VPN connection by using the Amazon Virtual Private Cloud service.
This service is still in beta, but it has been publicly available for about a year. A connection currently costs $0.05 per hour (circa $36.5 per month) and you also pay for data transfer.
Check out Amazon Virtual Private Clouds. I think it is exactly what you are asking for.
You will need to work with your "IT person" to set up a VPN connection between your premises and the EC2 cloud. In practice you will likely need to:
1) Define a subnet for your EC2 instances (e.g. 10.10.10.x).
2) Build a VPN tunnel between your university and Amazon (Virtual Private Cloud) -- there's a rough sketch of these first two steps below.
3) Enable any routing or firewall changes at the university.
You know you've got it working when you can 'ping' the EC2 host from within your premises.
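To make steps 1 and 2 more concrete, here is a rough sketch of the API calls involved. The SDK (AWS SDK for JavaScript v3), region, CIDR ranges, and the university gateway IP are all my assumptions for illustration; the console or any other SDK gets you to the same place.

```typescript
// Rough sketch of steps 1-2: a VPC with a subnet, plus the VPN pieces that
// connect it back to the university. All IPs/CIDRs below are placeholders.
import {
  EC2Client,
  CreateVpcCommand,
  CreateSubnetCommand,
  CreateVpnGatewayCommand,
  AttachVpnGatewayCommand,
  CreateCustomerGatewayCommand,
  CreateVpnConnectionCommand,
} from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-east-1" });

async function sketchVpcWithVpn(): Promise<void> {
  // Step 1: the private address space your EC2 instances will live in.
  const { Vpc } = await ec2.send(new CreateVpcCommand({ CidrBlock: "10.10.0.0/16" }));
  await ec2.send(new CreateSubnetCommand({ VpcId: Vpc!.VpcId!, CidrBlock: "10.10.10.0/24" }));

  // Step 2: a VPN gateway on the Amazon side, attached to the VPC...
  const { VpnGateway } = await ec2.send(new CreateVpnGatewayCommand({ Type: "ipsec.1" }));
  await ec2.send(
    new AttachVpnGatewayCommand({ VpcId: Vpc!.VpcId!, VpnGatewayId: VpnGateway!.VpnGatewayId! })
  );

  // ...and a customer gateway describing the university's VPN device
  // (the public IP is a placeholder for whatever IT gives you).
  const { CustomerGateway } = await ec2.send(
    new CreateCustomerGatewayCommand({ Type: "ipsec.1", PublicIp: "198.51.100.10", BgpAsn: 65000 })
  );

  // The VPN connection ties the two gateways together; the IPsec tunnel
  // itself still has to be configured on the university's device (step 3).
  await ec2.send(
    new CreateVpnConnectionCommand({
      Type: "ipsec.1",
      CustomerGatewayId: CustomerGateway!.CustomerGatewayId!,
      VpnGatewayId: VpnGateway!.VpnGatewayId!,
    })
  );
}
```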
BTW, I have recently released a new service (Virtual Lab Management) that runs specifically on Amazon EC2. About 20% of prospective customers are now asking for VPC in order to use it, so I can attest that this is a solution that has generated interest in a lot of large organizations.