Do I need Amazon's EC2, CloudFront, RDS?

I want to publish a web site on Amazon's servers, that:
Runs CakePHP
Uses MySQL to store data
Lets users upload audio through Flash (currently using a hosted Flash Media Server) and listen to the files later
Do I need Amazon's EC2 for the website, RDS for the MySQL database, and CloudFront for the FMS? I'd really like a walkthrough of which services I should use.
Thanks.

First of all, you need the EC2 service to get a virtual machine where you can install Apache, PHP, and your web application.
Then you also need a database server and a data repository for the media files. The recommended way is exactly what you suggest: RDS for MySQL and CloudFront as the file repository.
Initially none of the above services (RDS, CloudFront, and even EBS) were available. Developers had no reliable way to use a MySQL database: even if it was installed on an EC2 instance, the instance wasn't guaranteed to stay up and running, and if the instance was lost, the data was lost with it. EBS was introduced for this reason. It provides mounted storage with guaranteed persistence that you can access from an EC2 instance. In theory you could install MySQL there and also use it to store the flash files. If you only want to serve files over HTTP, there is no problem using EBS.
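For illustration, creating and attaching such a volume is two CLI calls; a minimal sketch (the zone, size, IDs, and device name below are placeholders):
aws ec2 create-volume --availability-zone us-east-1a --size 100 --volume-type gp2
aws ec2 attach-volume --volume-id vol-0abc1234 --instance-id i-0abc1234 --device /dev/sdf   # then format and mount it from the instance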
CloudFront however has some advantages:
Users are automatically routed to the nearest edge location for high performance delivery of your content.
You can also use it to stream content through the RTMP protocol.
You don't have to worry about the size of the storage. With EBS you create a volume with a specific size, which could be a problem if you later find out that you need more space. With CloudFront the files are stored in S3, and you do not need to worry about their size.
You do not waste web server capacity. If you use EBS, the files will be served by your web server on EC2.
You could also use S3 on its own, but you wouldn't be able to use the RTMP protocol and you would need to manually create links to your files. Also, it wouldn't be possible to use your own domain name for the files.
RDS also has some advantages over installing MySQL on EC2/EBS:
Automated database backups
Monitoring with Amazon CloudWatch (a free service)
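To make the upload-and-serve flow concrete, here is a rough AWS CLI sketch (the bucket name and file are hypothetical): copy a file into S3, then put a CloudFront distribution in front of that bucket.
# upload an audio file to S3 (placeholder names)
aws s3 cp recording.mp3 s3://my-audio-bucket/uploads/recording.mp3
# create a CloudFront distribution with the bucket as its origin
aws cloudfront create-distribution --origin-domain-name my-audio-bucket.s3.amazonaws.com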

You need EC2 to launch an instance and build your LAMP server. RDS is a good fit if you don't want to manage the MySQL database yourself, but one limiting factor of RDS is that you can't set up your own DB replication.
For persistent storage, you can make use of EBS or S3 for your data files.

One thing not mentioned in any of these replies is the security that may (or may not) need to go around your file access. Cloud networks are good for publicly accessible data, but I haven't yet seen a cloud network that provides granular, per-user file access out of the box. While you may be able to obfuscate the URLs used to access files so that audio file IDs can't easily be guessed sequentially, that may not be enough if people are storing private audio. Not saying don't do it, just make the decision with care.
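One common way to approximate per-user access with S3 is to keep the bucket private and hand out short-lived pre-signed URLs. A minimal sketch with the AWS CLI, assuming a hypothetical bucket layout keyed by user:
# generate a URL for one private object, valid for 5 minutes (placeholder names)
aws s3 presign s3://my-audio-bucket/private/user42/clip.mp3 --expires-in 300
Your application would generate such a URL only after checking that the logged-in user owns the file.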

Related

Umbraco 8 application with load balancing

I have an Umbraco 8.2.2 application hosted on an AWS EC2 server.
Recently, I have encountered server availability issues that cause downtime once in a while.
One of the solutions I've thought about is to maintain an additional AWS EC2 server which hosts the same application (Same code, same database) and configure load balancing between them.
It will host both client and server.
To what extent is this possible, in your experience?
How can I handle obstacles like shared media & cache folders, as they should be the same?
I've heard about S3 as an option.
What additional obstacles may I face, and what should I put my focus on?
Thanks.
This sounds like a good use case for Amazon EFS, which offers a shared POSIX file system. You can move your media and cache folders onto an EFS share and then mount that share on the backend EC2 instances behind the load balancer. This solution requires few or no changes to your application itself; you are just changing the storage medium for certain files.
As for obstacles: EFS is a network filesystem, so it is generally not recommended to execute code from an EFS share or to use it for applications that require very low storage latency. If that's the case, you can consider Amazon FSx, but that is a much more expensive solution.
If you can't avoid executing your code from EFS, just try it out and see how it affects your application's performance. EFS works fine for plenty of web application use cases. Here is a tutorial on how to host a simple website using EFS behind a load balanced environment to get you started.
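For reference, mounting the share on each instance is a one-liner (the filesystem ID, region, and mount point below are placeholders):
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-12345678.efs.us-east-1.amazonaws.com:/ /var/www/umbraco/media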
If EFS is not an option, you could offload your static content to Amazon S3 and serve it through CloudFront. This is probably a cheaper option and takes a lot of traffic off your load balancer and EC2 instances, but it is also probably more work, because you have to refactor your application to serve content through CloudFront. Here is a tutorial (there are plenty more online) on how to create a static website that serves content through CloudFront. In your case, you would serve the content (i.e. your media files) through S3/CloudFront and update the links your application uses so that the content is retrieved from the CloudFront endpoint instead of directly from your application/load balancer endpoint. So the work is on two fronts: setting up the S3/CloudFront environment, and configuring your application to offload the content to S3 and serve it through CloudFront.
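A hedged sketch of the offload step with the AWS CLI (bucket name and paths are hypothetical; the CloudFront distribution would then use this bucket as its origin):
# push the media folder to S3, only transferring changed files
aws s3 sync ./wwwroot/media s3://umbraco-media-bucket/media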

Transferring between IaaS providers with only raw images of instances and volumes

My system runs on a cloud environment provided by an academic research computing organization. Problem is, it's unreliable, service is bad, and it can be slow.
So I'm considering switching to a different cloud provider, but I want to know whether it is possible to create a new compute instance and volume from just snapshots (raw images) of those instances and volumes? I asked DigitalOcean and they said this isn't possible (I'd have to create a new droplet and reinstall/transfer everything).
I've also emailed AWS but no response yet. If this is possible (just because it seems like the simplest route), are there any recommendations of cloud providers?
My system runs Ubuntu with Apache and MySQL. It hosts a WordPress website, a large database, and a series of Java tools. The instance snapshot is about 20 GB and the storage volume is 250 GB.
Thanks in advance!

FTP/SFTP access to an Amazon S3 Bucket [closed]

Is there a way to connect to an Amazon S3 bucket with FTP or SFTP rather than the built-in Amazon file transfer interface in the AWS console? Seems odd that this isn't a readily available option.
There are three options.
You can use a native Amazon Managed SFTP service (aka AWS Transfer for SFTP), which is easier to set up.
Or you can mount the bucket to a file system on a Linux server and access the files over SFTP just like any other files on the server (which gives you greater control).
Or you can just use a (GUI) client that natively supports the S3 protocol (which is free).
Managed SFTP Service
In your Amazon AWS Console, go to AWS Transfer for SFTP and create a new server.
On the SFTP server page, add a new SFTP user (or users).
Permissions of users are governed by an associated AWS role in IAM service (for a quick start, you can use AmazonS3FullAccess policy).
The role must have a trust relationship to transfer.amazonaws.com.
For details, see my guide Setting up an SFTP access to Amazon S3.
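If you prefer to script the setup rather than use the console, the same can be done with the AWS CLI; a minimal sketch (the server ID, role ARN, bucket, and key below are placeholders):
aws transfer create-server --identity-provider-type SERVICE_MANAGED
aws transfer create-user --server-id s-0123456789abcdef0 --user-name alice --role arn:aws:iam::111111111111:role/s3-sftp-access --home-directory /my-bucket/alice --ssh-public-key-body "ssh-rsa AAAA..."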
Mounting Bucket to Linux Server
Just mount the bucket using the s3fs file system (or similar) on a Linux server (e.g. Amazon EC2) and use the server's built-in SFTP server to access the bucket.
Install s3fs
Add your security credentials in the form access-key-id:secret-access-key to /etc/passwd-s3fs
Add a bucket mounting entry to fstab:
<bucket> /mnt/<bucket> fuse.s3fs rw,nosuid,nodev,allow_other 0 0
For details, see my guide Setting up an SFTP access to Amazon S3.
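Concretely, on a Debian/Ubuntu box the steps above might look like this (bucket name and credentials are placeholders):
sudo apt-get install s3fs
echo 'AKIAEXAMPLEKEY:your-secret-access-key' | sudo tee /etc/passwd-s3fs
sudo chmod 600 /etc/passwd-s3fs
sudo mkdir -p /mnt/my-bucket
sudo mount /mnt/my-bucket    # picks up the fstab entry above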
Use S3 Client
Or use any free "FTP/SFTP client" that's also an "S3 client", and you do not have to set up anything on the server side. For example, my WinSCP or Cyberduck.
WinSCP even has scripting and a .NET/PowerShell interface, if you need to automate the transfers.
Update
AWS now offers a fully managed SFTP gateway service for S3 that integrates with IAM and can be administered using aws-cli.
There are theoretical and practical reasons why this isn't a perfect solution, but it does work...
You can install an FTP/SFTP service (such as proftpd) on a Linux server, either in EC2 or in your own data center, then mount a bucket into the filesystem where the FTP server is configured to chroot, using s3fs.
I have a client that serves content out of S3, and the content is provided to them by a third party who only supports FTP pushes. So, with some hesitation (due to the impedance mismatch between S3 and an actual filesystem), but lacking the time to write a proper FTP/S3 gateway server software package (which I still intend to do one of these days), I proposed and deployed this solution for them several months ago, and they have not reported any problems with the system.
As a bonus, since proftpd can chroot each user into their own home directory and "pretend" (as far as the user can tell) that files owned by the proftpd user are actually owned by the logged in user, this segregates each ftp user into a "subdirectory" of the bucket, and makes the other users' files inaccessible.
There is a problem with the default configuration, however.
Once you start to get a few tens or hundreds of files, the problem will manifest itself when you pull a directory listing, because ProFTPd will attempt to read the .ftpaccess files over, and over, and over again, and for each file in the directory, .ftpaccess is checked to see if the user should be allowed to view it.
You can disable this behavior in ProFTPd, but I would suggest that the most correct configuration is to configure additional options -o enable_noobj_cache -o stat_cache_expire=30 in s3fs:
-o stat_cache_expire (default is no expire)
specify expire time(seconds) for entries in the stat cache
Without this option, you'll make fewer requests to S3, but you also will not always reliably discover changes made to objects if external processes or other instances of s3fs are also modifying the objects in the bucket. The value "30" in my system was selected somewhat arbitrarily.
-o enable_noobj_cache (default is disable)
enable cache entries for the object which does not exist. s3fs always has to check whether file(or sub directory) exists under object(path) when s3fs does some command, since s3fs has recognized a directory which does not exist and has files or subdirectories under itself. It increases ListBucket request and makes performance bad. You can specify this option for performance, s3fs memorizes in stat cache that the object (file or directory) does not exist.
This option allows s3fs to remember that .ftpaccess wasn't there.
Unrelated to the performance issues that can arise with ProFTPd, which are resolved by the above changes, you also need to enable -o enable_content_md5 in s3fs.
-o enable_content_md5 (default is disable)
verifying uploaded data without multipart by content-md5 header. Enable to send "Content-MD5" header when uploading a object without multipart posting. If this option is enabled, it has some influences on a performance of s3fs when uploading small object. Because s3fs always checks MD5 when uploading large object, this option does not affect on large object.
This is an option which never should have been an option -- it should always be enabled, because not doing this bypasses a critical integrity check for only a negligible performance benefit. When an object is uploaded to S3 with a Content-MD5: header, S3 will validate the checksum and reject the object if it's corrupted in transit. However unlikely that might be, it seems short-sighted to disable this safety check.
Quotes are from the man page of s3fs. Grammatical errors are in the original text.
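Putting the recommended options together, the mount command might look like this (bucket name and mount point are placeholders):
s3fs my-bucket /var/ftp/s3 -o allow_other -o stat_cache_expire=30 -o enable_noobj_cache -o enable_content_md5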
Answer from 2014 for the people who are down-voting me:
Well, S3 isn't FTP. There are lots and lots of clients that support S3, however.
Pretty much every notable FTP client on OS X has support, including Transmit and Cyberduck.
If you're on Windows, take a look at Cyberduck or CloudBerry.
Updated answer for 2019:
AWS has recently released the AWS Transfer for SFTP service, which may do what you're looking for.
Or spin up a Linux instance running SFTP Gateway in your AWS infrastructure that saves uploaded files to your Amazon S3 bucket.
Supported by Thorntech
Amazon has released SFTP services for S3, but they only do SFTP (not FTP or FTPES) and they can be cost prohibitive depending on your circumstances.
I'm the Founder of DocEvent.io, and we provide FTP/S Gateways for your S3 bucket without having to spin up servers or worry about infrastructure.
There are also other companies that provide a standalone FTP server that you pay by the month that can connect to an S3 bucket through the software configuration, for example brickftp.com.
Lastly there are also some AWS Marketplace apps that can help, here is a search link. Many of these spin up instances in your own infrastructure - this means you'll have to manage and upgrade the instances yourself which can be difficult to maintain and configure over time.
WinSCP now supports the S3 protocol
First, make sure your AWS user with S3 access permissions has an "Access key ID" created. You also have to know the "Secret access key". Access keys are created and managed on the Users page of the IAM Management Console.
Make sure New site node is selected.
On the New site node, select Amazon S3 protocol.
Enter your AWS user Access key ID and Secret access key
Save your site settings using the Save button.
Login using the Login button.
FileZilla just released a Pro version of their FTP client. It connects to S3 buckets in a streamlined, FTP-like experience. I use it myself (no affiliation whatsoever) and it works great.
As other posters have pointed out, there are some limitations with the AWS Transfer for SFTP service. You need to align it closely with your requirements. For example, there are no quotas, whitelists/blacklists, or file type limits, and non-key-based access requires external services. There is also a certain overhead relating to user management and IAM, which can become a pain at scale.
We have been running an SFTP S3 proxy gateway for about 5 years now for our customers. The core solution is wrapped in a collection of Docker services and deployed in whatever context is needed, even on-premise or local development servers. The use case for us is a little different, as our solution is focused on data processing and pipelines rather than a file share. In a Salesforce example, a customer will use SFTP as the transport method, sending email, purchase...data to an SFTP/S3 endpoint. This is mapped to an object key on S3. Upon arrival, the data is picked up, processed, routed, and loaded to a warehouse. We also have fairly significant auditing requirements for each transfer, something the AWS CloudWatch logs do not directly provide.
As others have mentioned, rolling your own is an option too. Using AWS Lightsail you can set up a cluster of, say, four $10 2 GB instances using either Route 53 or an ELB.
In general, it is great to see AWS offer this service and I expect it to mature over time. However, depending on your use case, alternative solutions may be a better fit.

No permanent filesystem for Heroku?

The app I am currently hosting on Heroku allows users to submit photos. Initially, I was thinking about storing those photos on the filesystem, as storing them in the database is apparently bad practice.
However, it seems there is no permanent filesystem on Heroku, only an ephemeral one. Is this true and, if so, what are my options with regards to storing photos and other files?
It is true. Heroku allows you to create cloud apps, but those cloud apps are not "permanent" - they are instances (or "slugs") that can be replicated multiple times on Amazon's EC2 (that's why scaling is so easy with Heroku). If you push a new version of your app, the slug is recompiled, and any files you had saved to the filesystem in the previous instance are lost.
Your best bet (whether on Heroku or otherwise) is to save user-submitted photos to a CDN. Since you are on Heroku, and Heroku uses AWS, I'd recommend Amazon S3, optionally with CloudFront enabled.
This is beneficial not only because it gets around Heroku's ephemeral "limitation", but also because a CDN is much faster, and will provide a better service for your webapp and experience for your users.
Depending on the technology you're using, your best bet is likely to stream the uploads to S3 (Amazon's storage service). You can interact with S3 with a client library to make it simple to post and retrieve the files. Boto is an example client library for Python - they exist for all popular languages.
Another thing to keep in mind is that Heroku filesystems are not shared either. This means you'll have to push the file to S3 from the same application that handles the upload (rather than, say, a worker process). If you can, load the upload into memory, never write it to disk, and post it directly to S3. This will increase the speed of your uploads.
Because Heroku is hosted on AWS, the streams to S3 happen at a very high speed. Keep that in mind when you're developing locally.
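To illustrate the "never write it to disk" idea, the AWS CLI can stream stdin straight to S3 (the URL and bucket below are hypothetical); in application code you would do the same thing with a client library such as Boto:
# pipe an incoming byte stream directly to an S3 object, no temp file
curl -s https://example.com/incoming-upload | aws s3 cp - s3://my-app-photos/user42/photo.jpg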

AWS Cloudfront with Webserver, RDS and Wowza Media Server

I am setting up a website that uses Amazon EC2 as the web server, EBS to store the website's data, and another instance running Wowza for video-on-demand streaming, and it seems hard to find answers to my questions:
If my webserver instance is terminated, am I losing the Apache settings/modules on my EC2 instance as well?
Can I have MySQL on my webserver instance to save the cost of RDS, or is that a bad idea?
If I am using RDS for the database, is it deployed to edge locations as well (like CloudFront does)?
If I also have a Wowza Media Server instance running on EC2, can Wowza make use of CloudFront as well, so that somebody from somewhere in the world gets a VOD stream from the nearest edge location?
Thanks!
If your instance is terminated, the instance and the EBS storage associated with that instance are lost. If you want to remove an instance without losing the state of that server, create an image (AMI) of the server before terminating it.
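For example, with the AWS CLI (the instance ID and names below are placeholders):
# snapshot the server, including Apache settings and modules, as a reusable AMI
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "webserver-backup" --description "Apache config and modules"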
Depends on how much IT admin you want to do. The whole idea of Amazon is that it takes this admin away from you, for a cost higher than doing it yourself. They do the database backups and load balancing for you (that's the trickier part). That said, Amazon isn't fail-proof: you have to keep backups outside the Amazon system for everything. I've had instances crash and trash the disk; it does happen.
Their database instances are deployed in their main data center locations, which are different from the CloudFront edge locations. Keeping the servers and databases in the same zone will save you network costs.
To use CloudFront, you first create a distribution (obviously) and then use the CloudFront domain instead of your own; the distribution maps through to your domain and caches content at the edge locations. If the content is accessible from the server, it is possible to put CloudFront in front of it. Note that CloudFront not only charges slightly more than traffic served directly from your server, it also charges for the traffic from your server to the edge locations (as instance traffic), and you'll be billed per 10,000 requests in addition to bandwidth (larger content works out cheaper per MB).
It's also possible to map your own domain to the CloudFront URL if you want a pretty-looking domain.
