I'm currently using a VPS plan at vpsdime.com as my development server. I move a lot and use different computers, so I didn't want to develop locally.
Soon, I'll be able to launch my webapp (approx. 5-10 users to start with). Should I simply install my production app on the same VPS, or would you advise getting another server? Why?
You can safely use the same server. Just make sure everything is separated per environment:
Different Redis database
Different MySQL database
Different Elasticsearch server
Different location to store session data
Different caching location
Different queues (Redis/Beanstalk, ...)
Different AWS bucket
Different ... you get the gist.
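For example, a minimal sketch of what that separation could look like in a Laravel-style .env file per environment (all names and values here are made up for illustration):
# .env for the production site (hypothetical values)
APP_ENV=production
DB_DATABASE=myapp_prod
REDIS_DB=0
CACHE_PREFIX=myapp_prod
SESSION_COOKIE=myapp_prod_session
# .env for the development site (hypothetical values)
APP_ENV=local
DB_DATABASE=myapp_dev
REDIS_DB=1
CACHE_PREFIX=myapp_dev
SESSION_COOKIE=myapp_dev_session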
It should be straightforward to set up different vhosts with Apache or Nginx.
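As a rough sketch (assuming Nginx on Ubuntu; the domains and paths are made up), something like this gives each environment its own vhost on the same box:
# create one server block per environment (paths and domains are assumptions)
sudo tee /etc/nginx/sites-available/myapp-prod >/dev/null <<'EOF'
server {
    listen 80;
    server_name example.com;
    root /var/www/myapp-prod/public;
    index index.php index.html;
    # plus your usual PHP-FPM location block
}
EOF
sudo tee /etc/nginx/sites-available/myapp-dev >/dev/null <<'EOF'
server {
    listen 80;
    server_name dev.example.com;
    root /var/www/myapp-dev/public;
    index index.php index.html;
    # plus your usual PHP-FPM location block
}
EOF
# enable both sites and reload Nginx
sudo ln -s /etc/nginx/sites-available/myapp-prod /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/myapp-dev /etc/nginx/sites-enabled/
sudo nginx -t && sudo service nginx reload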
Related
I have hosted my application on AWS, and a load balancer is running in front of two instances served by Nginx on top of PHP 7.0-FPM. Let's say that my application downloads a file and stores it locally, so that the contents can be served to the customers. With an auto scaling group configured for two instances:
1) If my session begins with instance-1 where my file gets downloaded, and suddenly switches over to instance-2, will I be getting the same content?
Or
2) If a session is created on a single instance, will the same instance be used until I log out of my application?
Any help is much appreciated!!
For a load-balanced website with more than one instance, it is highly recommended that you store cache and sessions in one place rather than spread across the instances. For this, you can install memcached on the servers and configure them to point to one server that stores it all.
SESSION_DRIVER=memcached
CACHE_DRIVER=memcached
MEMCACHED_HOST=127.0.0.1 #on your memcache server, point to localhost
MEMCACHED_HOST=10.10.1.10 #on other instances, point to memcache server
MEMCACHED_PORT=11211
For file and image uploads, use S3 from AWS or a dedicated storage server with FTP, so that all the servers can access the files directly and in the same way. Easiest and most efficient :)
If you store them locally, your servers won't be synced with the same content, and your users will end up with 404s.
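If the app is Laravel (an assumption based on the env variables above), pointing the filesystem at S3 is also mostly an .env change; the bucket name and region below are made up, and depending on the Laravel version the first variable may be called FILESYSTEM_DISK instead:
FILESYSTEM_DRIVER=s3
AWS_ACCESS_KEY_ID=your-key
AWS_SECRET_ACCESS_KEY=your-secret
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=myapp-uploads
Anything stored through the s3 disk is then readable from every instance behind the load balancer.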
I would like to use sphinxsearch on our site, which is hosted on an auto-scaled, load-balanced server farm with 1 LB, 2 DB, 2 app, and 1 memcached server.
Using Sphinx to search a site with over a million posts (a forum site), are any of these ideas a recommended way to set up sphinxsearch?
a: Set up an extra server (or put it on the memcached instance) and have the app servers pull results from that server.
b: Set up sphinxsearch on the app servers and find a way to replicate the index.
c: Whatever other idea you can think of?
A) Try putting it on a separate server first; if it does not take a lot of resources you can move it to the memcached server.
B) Replicating the indexes would effectively mean rsyncing them? If so, you would need to restart the search daemon after every sync, so I would not suggest it.
I would go with A.
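With option A the app servers just query the central searchd over the network; a quick sanity check from an app server could look like this (the host, port and index name are hypothetical, and this assumes searchd has a SphinxQL/mysql41 listener configured):
# query the dedicated Sphinx box over SphinxQL from an app server
mysql -h 10.0.0.20 -P 9306 --protocol=tcp -e "SELECT id FROM forum_posts WHERE MATCH('some keyword') LIMIT 10;"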
I have an application that we are currently running on a number of co-located servers and I'm interested in moving everything to the cloud.
I have a legacy application running Postgres and its replacement application using MySQL as its data store. I'm interested in moving to EC2 and looking to do this as pain-free as possible. I was planning on using Amazon RDS for the MySQL data store but am looking for options for the Postgres install.
I know that Heroku is built on top of EC2 and has Postgres support and was wondering:
Has anyone had any experience accessing a Heroku Postgres database from an application running in EC2? Comments on performance, reliability, and ease of administration would be appreciated.
The other alternative is to install Postgres on EC2 with EBS volumes, but I've heard mixed reviews on performance, reliability, and ease of administration.
Thanks in advance, any experience and suggestions would be greatly appreciated.
I've done this with several colocated boxes on the east coast. Heroku actually has a completely independent service: Heroku Postgres, which is built for this specific use case. The databases you create are all independent (not related to any Heroku apps).
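For what it's worth, connecting from EC2 is a normal Postgres connection; Heroku Postgres requires SSL, so a quick test might look like this (the connection URL is a placeholder for the credentials Heroku gives you):
# test the connection from an EC2 box
export DATABASE_URL="postgres://user:password@your-heroku-postgres-host:5432/dbname"
psql "$DATABASE_URL?sslmode=require" -c "SELECT version();"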
I've been trying to get to grips with Amazon's AWS services for a client. As is evidenced by the very n00bish question(s) I'm about to ask, I'm having a little trouble wrapping my head around some very basic things:
a) I've played around with a few instances and managed to get LAMP working just fine. The problem I'm having is that the code I place in /var/www doesn't seem to be shared across those machines. What do I have to do to achieve this? I was thinking of a shared EBS volume and changing Apache's document root?
b) Furthermore, what is the best way to upload code and assets to an EBS/S3 volume? Should I set up an instance to handle FTP to the aforementioned shared volume?
c) Finally, I have a basic plan for the setup that I wanted to run by someone who actually knows what they are talking about:
DNS pointing to Load Balancer (AWS Elastic Load Balancing)
Load Balancer managing multiple AWS EC2 instances.
EC2 instances sharing code from a single EBS store.
An RDS instance to handle database queries.
Cloud Front to serve assets directly to the user.
Thanks,
Rich.
Edit: my solution, for anyone who comes across this on Google.
Please note that my setup is not finished yet, and the bash scripts I'm providing in this explanation are probably not very good: even though I'm very comfortable with the command line, I have no experience of scripting in bash. However, it should at least show you how my setup works in theory.
All AMIs are Ubuntu Maverick i386 from Alestic.
I have two AMI Snapshots:
Master
Users
git - Very limited access; runs git-shell so it can't open an interactive SSH session, but it hosts a git repository which can be pushed to or pulled from.
ubuntu - Default SSH account, used to administer server and deploy code.
Services
Simple git repository hosting via ssh.
Apache and PHP, databases are hosted on Amazon RDS
Slave
Services
Apache and PHP, databases are hosted on Amazon RDS
Right now (this will change) this is how I deploy code to my servers:
Merge changes to master branch on local machine.
Stop all slave instances.
Use Git to push the master branch to the master server.
Log in as the ubuntu user via SSH on the master server and run a script which does the following:
Exports (git-archive) the code from the local repository to a folder.
Compresses the folder and uploads a backup of the code to S3 with a timestamp attached to the file name.
Replaces the code in /var/www/ with the exported folder and sets appropriate permissions.
Removes the exported folder from the home directory but leaves the compressed file containing the latest code intact.
Start all slave instances. On startup they run a script:
Apache does not start until it's triggered.
Copies (scp) the latest compressed code from the master to /tmp/www.
Extracts the code, replaces /var/www/, and sets appropriate permissions.
Starts Apache.
I would provide code examples but they are very incomplete and I need more time. I also want to get all my assets (css/js/img) automatically pushed to S3 so they can be distributed to clients via CloudFront.
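In the meantime, here is a rough, untested sketch of the two scripts described above (the paths, bucket and hostname are assumptions, and s3cmd is just one way to talk to S3):
#!/bin/bash
# master-side deploy script (run as the ubuntu user after the git push)
set -e
STAMP=$(date +%Y%m%d%H%M%S)
# export the master branch to a folder, then compress it
git --git-dir=/home/git/myapp.git archive --prefix=myapp/ master | tar -xf - -C /home/ubuntu
tar -czf /home/ubuntu/myapp-latest.tar.gz -C /home/ubuntu myapp
# back up the timestamped tarball to S3
s3cmd put /home/ubuntu/myapp-latest.tar.gz s3://my-code-backups/myapp-$STAMP.tar.gz
# replace the web root and fix permissions
sudo rsync -a --delete /home/ubuntu/myapp/ /var/www/
sudo chown -R www-data:www-data /var/www
rm -rf /home/ubuntu/myapp    # keep the tarball with the latest code, drop the folder

#!/bin/bash
# slave-side start-up script (assumes SSH keys to the master are already in place)
set -e
scp ubuntu@master.internal:/home/ubuntu/myapp-latest.tar.gz /tmp/www.tar.gz
sudo rm -rf /var/www/*
sudo tar -xzf /tmp/www.tar.gz -C /var/www --strip-components=1
sudo chown -R www-data:www-data /var/www
sudo service apache2 start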
EBS is like a hard drive you can attach to one instance, basically a 1:1 mapping. S3 is the only shared storage option in AWS; otherwise you will need to set up an NFS server or similar.
What you can do is put all your PHP files on S3 and then sync them down to a new instance when you start it.
I would recommend bundling a custom AMI with everything you need installed (Apache, PHP, etc.) and setting up a cron job to sync the PHP files from S3 to your document root. Your workflow would be: upload files to S3, let the server's cron sync them down.
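A sketch of that cron approach (the bucket name and paths are made up; s3cmd is one tool for this, and awscli's `aws s3 sync` works the same way today):
# crontab entry on each web instance: pull the latest PHP files from S3 every 5 minutes
*/5 * * * * s3cmd sync --delete-removed s3://my-code-bucket/current/ /var/www/ >> /var/log/code-sync.log 2>&1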
The rest of your setup seems pretty standard.
Reading about and using Amazon Web Services, I'm not really able to grasp how to use them correctly. Sorry about the long question:
I have an EC2 instance which mostly does the work of a web server (Apache for file sharing and Tomcat with Play Framework for the web app). As it's a web server, the instance is running 24/7.
It just came to my attention that the data on the EC2 instance is non-persistent. This means I lose my database and files if it's stopped. But I guess it also means my server settings and installed applications are lost, as they are just files in the same way as the other data.
This means that I will either have to rewrite the whole app to use Amazon CloudDB, or write some code which stores the DB on S3 and make my own AMI with the correct applications installed and configured. Or can this be quick-fixed by using EBS somehow?
My questions are: 1. Is my understanding of AWS correct? and 2. Is it worth it? It could be a possibility to just set up a regular dedicated server where everything is persistent, as you would expect. I would love to have the scalability of AWS though.
If you use an EBS volume with your EC2 instance, you can mount/unmount it to have persistent storage. You can also use Amazon RDS to handle your database, which is handy (but can be slightly on the pricier side).
So a way to think of it is:
Your EC2 instance: Get the OS set up exactly like you'd like it along with your web application - basically, get your static stuff all in place.
EBS volume: this can be mounted and used for things like user uploads and anything else that needs to persist (see the mount sketch after this list).
RDS instance: This is a dedicated database server with no hassles. It's nice - I use a MySQL RDS and it automatically makes two daily backups, and is scalable like EC2 instances.
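A minimal sketch of putting an EBS volume to work, assuming it has been attached in the console and shows up as /dev/xvdf (the device name varies):
sudo mkfs -t ext4 /dev/xvdf        # first use only: this wipes the volume
sudo mkdir -p /data
sudo mount /dev/xvdf /data
echo '/dev/xvdf /data ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab    # remount automatically after reboot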
Amazon Web Services is a better approach to hosting your applications, Jon. You have a basic understanding of AWS, but you should know that you can also launch an instance that is persistent: just launch an instance of an EBS-backed (persistent) AMI. You can also install your database and web server on the instance like on a regular server. There are probably only minimal differences between running an EC2 instance and a dedicated server. If you have any other questions you can contact me.
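If it helps, you can check whether an AMI is EBS-backed (persistent root) before launching it; with the current AWS CLI (the AMI id below is a placeholder) that is roughly:
aws ec2 describe-images --image-ids ami-0123456789abcdef0 --query 'Images[0].RootDeviceType' --output text
# prints "ebs" for a persistent root volume; an "instance-store" root does not survive the instance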