Autoscale Magento in the cloud

I have just entered the world of e-commerce, and I am trying to get my Magento website up and running.
I am using the AWS cloud to host my website. I want an architecture where I can run multiple web servers connected to a single DB server. Specifically, I want to use an AWS Auto Scaling group along with an ELB to start multiple EC2 instances during high load. There is only one Multi-AZ RDS database instance.
As an initial trial, I created 2 EC2 instances and installed Magento on both of them, using the same RDS database for both. But as it turns out, Magento stores the base URL of the web server in the database itself, which means I can only store the base URL of one particular server.
To be precise, Magento stores the base URL in the table core_config_data: the rows whose 'path' column is "web/unsecure/base_url" or "web/secure/base_url" hold, in their 'value' column, the URL of the web server where Magento is installed.
My question is: how can I use multiple servers with EC2 Auto Scaling if Magento permits only one server address as the base URL?
Here's a partial view of the table with 2 rows -
config_id  scope    scope_id  path                   value
5          default  0         web/unsecure/base_url  http://server1.com/magento/
6          default  0         web/secure/base_url    http://server1.com/magento/
Are there any other known methods of achieving horizontal scaling in Magento under heavy load?

I don't think load balancing works like that.
You need a load balancer that receives the requested URL and then passes it off to one of the servers running Magento - so I think you would pass the same URL to both servers anyway, no? I do not know how to do it.
You are trying to set up a very complicated system.
You could look at overriding some functions if you want to have different values for the secure and non-secure URLs. Try reading this code to get you started:
//file app/code/core/Mage/Core/Model/Store.php
//class Mage_Core_Model_Store
//function getBaseUrl()
//function getDistroServerVars()
//file app/code/core/Mage/Core/Model/Url.php
//class Mage_Core_Model_Url
//function getBaseUrl()
//file app/code/core/Mage/Core/Controller/Request/Http.php
//class Mage_Core_Controller_Request_Http
//function - I don't know, any of them, none of them
and look at any files containing the string 'substDistroServerVars'; isDirectAccessFrontendName might also expose something. getDistroServerVars is discussed at the end of this great article by the almighty Alan Storm.
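To make that concrete, here is a minimal sketch, assuming a Magento 1 version that supports the {{base_url}} placeholder (Mage_Core_Model_Store::BASE_URL_PLACEHOLDER): storing the placeholder instead of a fixed hostname makes each node expand the base URL from the incoming request, which is the multi-server behaviour the question asks about.

<?php
// A sketch, not production advice: write the {{base_url}} placeholder into
// core_config_data so each web node substitutes the current request's
// scheme/host at runtime (via getDistroBaseUrl()/getDistroServerVars()).
require 'app/Mage.php';   // run from the Magento root
Mage::app('admin');

$config = Mage::getConfig();
$config->saveConfig('web/unsecure/base_url', '{{base_url}}', 'default', 0);
$config->saveConfig('web/secure/base_url', '{{base_url}}', 'default', 0);

// The old values are cached, so flush the config cache afterwards.
Mage::app()->cleanCache();

Note that trusting the request's Host header like this invites cache poisoning; the safer production setup is the one described in a later answer here - give every node the same base_url, pointing at the one public hostname that resolves to the load balancer.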
But I don't think that is the real answer - for the real answer skip to the links at the end of this tedious monologue.
If this is your first foray into Magento and you really think you are going to get the volume of traffic into your shop that requires load balancing over two servers, then you can afford, *must afford*, third-party hosting: get professionals with many, many man-years of experience running Magento under heavy load across multiple servers. You will also want to hand off (at least) the images to a CDN.
*I mean, if your shop has that high a volume then it has a high revenue and you should invest that revenue in keeping your shop running: professional hosting with 24/7 support. Otherwise downtime will be expensive and a long implementation will mean lost revenue.
If you are just trying this out for fun and to learn something about setting up Magento on multiple servers then I recommend two things:
1) Practice getting Magento running on one server first - and optimising for volume there (caching, compilers, DB tuning, log file analysis, flat tables, cron jobs, CDNs, possibly combined JS and CSS, web server tuning and getting the headers right, possibly a full page cache and a sprinkling of Redis) - because that isn't a trivial list on one server, never mind two plus a DB server and an ELB.
And 2) practice getting Apache or nginx to serve load-balanced content with your e-commerce SSL certificate in place. Only then should you try to join the two systems. Be prepared to spend many months on this - including figuring out Siege, ab or JMeter for simulated load testing.
But if you really want to get the AWS ELB set up, here are a few excellent resources to get you started - particularly the detailed tutorial by Adrian Duke (first link). Pay great attention to the details in the last section of that article, subtitled 'Magento'; that may be the answer to your question.
Getting and scaling Magento in the cloud by Adrian Duke
Using AWS Auto Scaling with an Elastic Load Balancer cluster on EC2 (actually a WordPress install, not Magento, but Mr Shroder knows his Magento)
Running Magento in an AWS Environment (All hail Alan Storm)

I've had a rather large amount of success modifying Magento to be a Beanstalk package. The steps (loosely) were:
Install Git locally
Install the AWS command line tools
Install the AWS Beanstalk command line tools
Build a module to upload images to S3 every time one is uploaded in Magento (see the observer sketch after this list)
Utilize OnePica's Magento extension
Use Amazon's Redis cache (ElastiCache) for caching data
Use RDS for the database
Use Route 53 for routing, and
Use CloudFront for image, JS & CSS distribution
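As promised above, a loose sketch of the upload-to-S3 module idea, assuming the AWS SDK for PHP is autoloadable on the Magento instance; the module class, its event wiring in config.xml, and the bucket name are all hypothetical:

<?php
// Hypothetical observer class; config.xml would bind it to the
// catalog_product_save_after event. It pushes the product's gallery images
// to S3 so all app servers (and CloudFront) share one copy.
class Mycompany_S3sync_Model_Observer
{
    public function syncProductImages(Varien_Event_Observer $observer)
    {
        $product = $observer->getEvent()->getProduct();
        $mediaConfig = Mage::getSingleton('catalog/product_media_config');

        // Credentials come from the instance role; bucket/region are placeholders.
        $s3 = new Aws\S3\S3Client(array('version' => 'latest', 'region' => 'us-east-1'));

        foreach ($product->getMediaGalleryImages() as $image) {
            $s3->putObject(array(
                'Bucket'     => 'my-magento-media',
                'Key'        => 'catalog/product' . $image->getFile(),
                'SourceFile' => $mediaConfig->getMediaPath($image->getFile()),
            ));
        }
    }
}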
A couple of drawbacks to AWS:
Customizing Magento to look for things is a pain in the ass. As we speak I'm trying to get it to keep sessions persistent between EC2 instances, as the load balancer chops them up (a shared-session sketch follows below).
Every time you need to modify Magento in any way it's a git commit (then we test locally, via a separate Beanstalk instance) and then a push to production.
Short of that it's been fairly stable. We're not hitting high numbers yet, though.
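On the session-persistence drawback: the core idea, sketched below at the plain-PHP level with the phpredis extension and a placeholder ElastiCache endpoint, is to move sessions off the node's local filesystem into a shared store. In Magento itself the same effect is usually achieved in app/etc/local.xml, e.g. with <session_save>db</session_save> or a Redis session module.

<?php
// A sketch: any node behind the load balancer reads/writes the same session
// data because it lives in shared Redis, not on the node's own disk.
// Requires the phpredis extension; the endpoint below is a placeholder.
ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://my-cache.abc123.use1.cache.amazonaws.com:6379');

session_start();
$_SESSION['hits'] = isset($_SESSION['hits']) ? $_SESSION['hits'] + 1 : 1;
echo $_SESSION['hits']; // increments across requests no matter which node answers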

Normally you put a load balancer in front of the nodes to distribute the load, and each node is configured to use the same base_url. MySQL replication can be used if you want multiple DB servers, but I have never found the need to do this. I have not used Amazon EC2 with Magento, but I have a similar setup in a dedicated server environment with two nodes, one DB server, a load balancer, and shared media.
The diagram here is useful, especially the shared storage for media - you're going to need to do something like this. http://www.severalnines.com/blog/how-cluster-magento-nginx-and-mysql-multiple-servers-high-availability
Also, Amazon seems to provide Elastic Load Balancing, which I think is what you're after. http://aws.amazon.com/documentation/elasticloadbalancing/

Related

Laravel App on AWS Elastic Beanstalk: How can I use a different database (on the same RDS) according to subdomain?

We are running a B2B business and we have a Laravel (v8) app running on AWS.
Previously, I created different environments for different customer companies:
one subdomain for each customer company, each running on a different environment, with the configuration described here:
AWS: Pointing sub-domains to different elastic beanstalk environments
In the beginning it seemed perfect. But then managing git branches, merges and deployments on several machines proved to be too difficult to maintain and, furthermore, too expensive.
So, I decided to point all the subdomains to the same environment for production.
BUT I also don't want the data of customer companies to get mixed together; I want to keep them separate, even if on the same database server.
SO, finally, I want my Laravel app to use a different database (on the same RDS instance) according to the subdomain.
1. Does it make sense?
2. How can I do it?
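A minimal sketch of one way to do (2) in Laravel 8: route middleware that swaps the default connection's database per subdomain. It assumes all tenant databases live on the same RDS instance and share credentials; the tenant_<subdomain> naming scheme and the middleware name are hypothetical.

<?php
// app/Http/Middleware/SelectTenantDatabase.php - register it in the Kernel
// or on the routes that serve customer subdomains.
namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\DB;

class SelectTenantDatabase
{
    public function handle(Request $request, Closure $next)
    {
        // "acme" from "acme.example.com"
        $subdomain = explode('.', $request->getHost())[0];

        // Repoint the default connection at this tenant's database,
        // then drop any connection already made with the old name.
        Config::set('database.connections.mysql.database', 'tenant_' . $subdomain);
        DB::purge('mysql');

        return $next($request);
    }
}

If hand-rolled middleware gets unwieldy, packages such as stancl/tenancy wrap this same database-per-tenant pattern together with tenant provisioning.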

Deploying Magento on Google Cloud

I want to move my Magento site from AWS to Google, and I want to make sure I'm doing it the right way, as I am new to Google Cloud.
These are the steps I'm planning on doing:
create an instance and install Redis and my Magento store on it
create a SQL instance for my DB
create a snapshot of this instance
create a template from this instance
create a group of instances with the template
create a load balancer and connect it with the instance group
Is that the correct way to build a solid and fairly scalable Magento site on Google Cloud?
Are there any services on Google Cloud I can use to make my store even faster and more scalable?
That's a fairly good way to deploy, but you can offload a few of those pieces to managed services from GCP.
Use the Click-To-Deploy solution for Magento (https://cloud.google.com/launcher/solution/bitnami-launchpad/magento?q=magento)
Launch another Click-To-Deploy solution for Redis (https://cloud.google.com/launcher/solution/bitnami-launchpad/redis?q=redis)
Launch a Cloud SQL instance (https://cloud.google.com/sql/)
Update your Magento instance with the configuration for these servers (a connectivity check is sketched after this list)
Use this as a template to launch an instance group
Put this group behind a load balancer
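As flagged in step 4, a quick sanity check worth running on the Magento VM before you turn it into the instance-group template; endpoints and credentials below are placeholders:

<?php
// Verify the managed Cloud SQL and Redis services are reachable from the VM,
// so every clone launched by the instance group comes up already wired to them.
$db = new PDO(
    'mysql:host=10.1.2.3;dbname=magento', 'magento', 'secret',
    array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION)
);
echo 'Cloud SQL OK: ' . $db->query('SELECT VERSION()')->fetchColumn() . "\n";

$redis = new Redis(); // phpredis extension
$redis->connect('10.1.2.4', 6379);
echo 'Redis OK: ' . $redis->ping() . "\n";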
Why is this better?
You don't have to manage your SQL DB's security and scaling
You get Redis and Magento set up with a few clicks, which saves a lot of time
All you need to manage are your settings - even when you want to upgrade Magento or move to better servers
Bonus: you should also make use of a CDN for your static resources, and Cloud CDN (https://cloud.google.com/cdn/) will be helpful there.
Further reading: go through this to get a sense of what else you can do with GCP (https://cloud.google.com/solutions/commerce/)

Problems flushing Magento Redis Cache on an installation with a separate backend server

My problem is that I do not seem to be able to flush the Magento Redis cache from the admin page. I realize that the problem could come from many sources, but my gut tells me it has something to do with the backend being on a separate server. My Magento installation is as follows:
Magento CE 1.8
Backend server and NFS (media) on an Amazon AWS EC2 instance at http://admin.example.com
Database on AWS RDS MySQL
2 app servers (scalable to more) on AWS Elastic Beanstalk at http://www.example.com (Route 53)
Regular backend cache (database 0), Lesti-FPC (database 0), and redis_session (database 1) on AWS ElastiCache Redis
I originally had my Lesti-FPC configured to use database 2 on the Redis cache. It worked pretty well as far as I could tell, until I realized that I couldn't flush the cache at all from the admin System > Cache Management page. "Flush Magento Cache," "Flush Cache Storage," "disable", and "refresh" did nothing. I could only flush it by rebooting the Redis node or going in with redis-cli and using Redis commands.
I then tried configuring Lesti-FPC to use database 0, as described above. It worked better: now I could flush the FPC with "Flush Cache Storage", although the other options still didn't work. At the time, I assumed it was an issue specific to Lesti-FPC. But anyway, using "Flush Cache Storage" was good enough for me at the time, especially once I discovered that I could flush the cache through code using
Mage::app()->getCacheInstance()->flush();
I just recently found out that the problem may not be specific to Lesti-FPC. While trying to fix the Lesti issue, I tried monitoring Redis. I know nothing about Redis or caching, but when I would try to refresh the FPC, I would see commands like:
"del" "zc:ti:403_FPC"
"srem" "zc:tags" "403_FPC"
But those tags never existed. Doing:
keys *FPC*
in redis would give me
"zc:ti:109_FPC"
but nothing with 403. So this means that my FPC caches do not get invalidated like they're supposed to after product/category changes and reindexing. I got around this by manually flushing the cache after changes and running cron jobs to flush and prime the FPC every few hours.
But it made me suspicious. I tried refreshing the other caches from the admin, and I found that Magento would always try to delete and read the 403 keys (some of which existed and some of which did not) but never any 109 keys (of which there are many).
My guess is that the 403 keys are specific to the admin server, and the 109 keys are specific to the app servers. The admin server, maybe because it is on a different subdomain, is not touching the app servers' cached entries. But the app servers are able to find their own keys fine, as demonstrated by the fact that the FPC is working very well.
Does this make sense? Is there something I could do to fix this? Did I configure something incorrectly or is this a magento bug?
It turns out that the Zend cache prefix is the first three characters of an md5 hash of the path to your etc folder.
My admin server has its document root at /var/www/html, and the full path /var/www/html/app/etc gives an id of 403. The app servers running on Elastic Beanstalk have their document roots at /var/app/current, which is set automatically at deployment.
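For anyone who wants to check this on their own servers, the rule (as of Magento 1's Mage_Core_Model_Cache, when no explicit prefix is configured) is substr(md5(<path to app/etc>), 0, 3) followed by an underscore; the two paths above are what produced my 403 and 109:

<?php
// The cache id prefix Magento derives when none is configured explicitly.
echo substr(md5('/var/www/html/app/etc'), 0, 3) . "_\n";    // admin server => 403_
echo substr(md5('/var/app/current/app/etc'), 0, 3) . "_\n"; // app servers  => 109_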
It seems pretty dumb. Why not a hash of the database address and database name or something? That would make more sense.
Anyway, I hope this helps someone.

Amazon AWS EC2 - Elastic IP - can I mirror my site while closing certain ports?

THE MISSION:
I have a development environment running on an Amazon AWS EC2 virtual server which I want to have tested by third parties.
THE PROBLEM:
I do NOT trust the companies who will test it not to sabotage the environment and/or steal code. Therefore, I don't want them to know URLs or permanent IPs, or even to access the web pages, which they could eventually find using a crawler.
My environment includes web applications and socket servers. I do NOT want to expose the web applications, while giving access only to the socket servers.
THE CONCEPT:
I have opted to use a secondary, impermanent Elastic IP pointing to the environment. This IP will be destroyed after 1 or 2 days, once basic tests have run. Subject to change (depending on suggestions from this thread).
THE QUESTION:
Can I create a secondary Elastic IP that allows access only to ports 5000-5100? If so, how?
THE ALTERNATIVE: In case this is not the most efficient procedure, what alternative would you propose?
MY SOLUTIONS: followed the FAQ Launching an Instance from a Backup
create a snapshot
create an image from the snapshot (snapshot menu - create image tag)
instances - launch instance
choose the image created from the snapshot as your root volume
edit security groups (opened the port range for sockets only, no web; see the sketch below)
deleted all web code from this instance
after 2 days, will delete the instance
Followed Create Image from Instance:
select (exclusively) the running instance you wish to mirror
right-click on the selected instance
choose Create Image from the dropdown
steps 4 to 7: same as above
This second solution seems to be more stable (especially re: status checks and connectivity issues).
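For that security-group step, here is a sketch using the AWS SDK for PHP; the group ID and region are placeholders. Note that the filtering lives on the security group, not on the Elastic IP - an Elastic IP only maps an address to an instance and has no port rules of its own.

<?php
// Allow only the socket ports 5000-5100 on the test instance's group.
require 'vendor/autoload.php'; // AWS SDK for PHP

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(array('version' => 'latest', 'region' => 'us-east-1'));

$ec2->authorizeSecurityGroupIngress(array(
    'GroupId'       => 'sg-0123456789abcdef0', // hypothetical test-only group
    'IpPermissions' => array(array(
        'IpProtocol' => 'tcp',
        'FromPort'   => 5000,
        'ToPort'     => 5100,
        'IpRanges'   => array(array('CidrIp' => '0.0.0.0/0')),
    )),
));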
any better solutions? thanx!

Magento Multiwebsite With Single Database & Single Admin

Friends
Can I create multiple web stores with a single admin and database *in Magento*? I want to host these stores on different servers. In other words, my database and code (different code for different sites) would be on different servers.
For example: I have abc.com and xyz.com stores created in Magento, hosted on different servers, but the database is the same.
I have already built the multi-store concept in Magento with Magento's default functionality, but in that case we can't host the different websites on different servers, because there is a single codebase for both of the sites/stores.
The reason I want to use this concept is security, and in case of a server crash there will be no harm to the other web store, because they are hosted on different servers.
Thanks....
I haven't seen such a scheme before, but I don't think it will work.
From an implementation point of view, Magento requires a dedicated empty database in which its tables are created as part of the installation process. So when you install a second Magento on the same database, the tables written by the previous Magento installation would be erased.
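For contrast, the stock single-codebase mechanism the asker already uses (the "default functionality" mentioned above) selects a website per hostname in index.php. A sketch with hypothetical website codes, as defined under System > Configuration > Manage Stores:

<?php
// index.php excerpt: route each domain to its own website scope, all served
// from one codebase and one database.
require 'app/Mage.php';

switch ($_SERVER['HTTP_HOST']) {
    case 'abc.com':
    case 'www.abc.com':
        Mage::run('abc_website', 'website');
        break;
    case 'xyz.com':
    case 'www.xyz.com':
        Mage::run('xyz_website', 'website');
        break;
    default:
        Mage::run(); // default store
}

That still leaves a single codebase, though - the separate-servers-per-site requirement is exactly what this stock mechanism cannot satisfy.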
