I want to move my Magento site from AWS to Google and I want to make sure I'm doing it the right way, as I am new to Google Cloud.
These are the steps I'm planning on doing:
create an instance and install Redis and my Magento store on it
create a SQL instance for my DB
create a snapshot of this instance
create a template from this instance
create a group of instances from the template
create a load balancer and connect it to the instance group
is that the correct way to build a solid and fairly scalable Magento site on GCP?
are there any services on Google Cloud I can use to make my store even faster and more scalable?
That's a fairly good way to deploy, but you can offload a few of those steps to GCP's managed services.
Use Click-To-Deploy solution for Magento (https://cloud.google.com/launcher/solution/bitnami-launchpad/magento?q=magento)
Launch another Click-To-Deploy solution for Redis (https://cloud.google.com/launcher/solution/bitnami-launchpad/redis?q=redis)
Launch a Cloud SQL instance (https://cloud.google.com/sql/)
Update your Magento instance with the configuration for these servers
Use this as a template to launch an instance group
Put this group behind a load balancer
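For the instance-group and load-balancer part, a rough gcloud sketch is below; all names, zones and sizes are placeholders I've made up, and the Click-to-Deploy launches above happen in the console rather than on the command line.
# Managed Cloud SQL (MySQL) instance for the Magento database
gcloud sql instances create magento-db --database-version=MYSQL_5_7 --tier=db-n1-standard-1 --region=us-central1
# Turn the configured Magento VM into a template, then into an autoscaled managed instance group
gcloud compute instance-templates create magento-template --source-instance=magento-vm --source-instance-zone=us-central1-a
gcloud compute instance-groups managed create magento-group --template=magento-template --size=2 --zone=us-central1-a
gcloud compute instance-groups managed set-autoscaling magento-group --zone=us-central1-a --max-num-replicas=5 --target-cpu-utilization=0.7
# HTTP load balancer in front of the group
gcloud compute instance-groups set-named-ports magento-group --named-ports=http:80 --zone=us-central1-a
gcloud compute health-checks create http magento-hc --port=80
gcloud compute backend-services create magento-backend --protocol=HTTP --health-checks=magento-hc --global
gcloud compute backend-services add-backend magento-backend --instance-group=magento-group --instance-group-zone=us-central1-a --global
gcloud compute url-maps create magento-lb --default-service=magento-backend
gcloud compute target-http-proxies create magento-proxy --url-map=magento-lb
gcloud compute forwarding-rules create magento-http --global --target-http-proxy=magento-proxy --ports=80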
Why is this better?
You don't have to manage your SQL DB security and scaling
You get Redis and Magento set up with a few clicks, which saves a lot of time
All you need to manage are your settings, even if you later want to upgrade Magento or move to better servers
Bonus: You should also make use of a CDN for your static resources and Cloud CDN (https://cloud.google.com/cdn/) will be helpful there.
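For example, assuming the hypothetical magento-backend backend service from the sketch above, enabling Cloud CDN is a one-liner:
# Serve cacheable static responses from Google's edge caches
gcloud compute backend-services update magento-backend --enable-cdn --global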
Further reading: go through this to get a sense of what else you can do with GCP (https://cloud.google.com/solutions/commerce/)
What is the recommended way to deploy changes (for example a change in some Content Type model) from development to production without downtime?
I’m using this setup.
I have a development instance with a development Postgres database.
In production I have 3 Strapi instances (serving both the API and the admin, using the same production Postgres database), and those instances are behind a load balancer.
Let’s say that I have a Content Type named Article (both in development and deployed to production).
Let’s assume that I want to change that content type, for example add some fields and remove some fields in the Article content type.
How to deploy changes to production without downtime?
I’ve done some tests, and when I, for example, update Strapi Production Instance #1 to pull new code for the updated models, Strapi will of course update the database. From that moment Strapi Production Instances #2 and #3 have problems serving the admin panel (JavaScript errors, because the database was changed but their JS model files are not updated).
After I update the code on instances #2 and #3, everything works as expected.
But doing something like this on a “working product” will be visible as downtime.
How to properly handle this situation? Thanks for help!
Could PM2 solve this problem? Strapi mentions this in their documentation:
PM2 Runtime allows you to keep your Strapi project alive and to reload it without downtime.
Strapi Docs v4
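For reference, the reload the docs describe looks roughly like the sketch below (assuming a server.js entry point; the names are illustrative). As far as I can tell it only swaps processes on a single machine, so on its own it would not fix the model mismatch between instances #2 and #3.
# Start Strapi under PM2 in cluster mode with 2 workers
pm2 start server.js --name strapi -i 2
# After pulling new code on this machine, restart the workers one by one
pm2 reload strapi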
I have installed a Joomla site with CF on Bluemix.
As you know, Joomla, like other CMSes, allows you to install components to add functionality.
This uploads the PHP code needed for the component and adds additional tables/entries in the database.
My issue is that when I cf push, the new component code is removed from the Joomla folders on Bluemix, while the database still contains the component's tables/entries.
I guess this is the situation for all CMSes (Drupal, WordPress, Joomla, vBulletin, etc.).
How could I get a kind of cf pull (?) to keep the modified CMS code, including the new component, locally on my computer?
So that when I redo the cf push, the installed component will not be erased.
Thank you in advance for your support,
Best regards
Yves
There is no cf pull command in Cloud Foundry. The closest you have is the cf files app-name command, which lets you navigate the directory structure of your cloud application and fetch specific files as needed, but this is really tedious if you have many files to copy to your local computer.
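Illustrative usage (app name and path are placeholders):
# List the app's directory structure, then print a single file's contents
cf files my-joomla-app
cf files my-joomla-app app/components/com_example/example.php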
It looks like Joomla fits better with the IBM Containers service in Bluemix. With IBM Containers you can use a Docker image for Joomla (https://hub.docker.com/_/joomla/) and use persistent volumes to preserve your added functionality. You can also use any Bluemix services (like a database) with IBM Containers.
The article below provides more details and step by step instructions to create an IBM Container for Wordpress. You can easily modify it for Joomla:
http://blog.ibmjstart.net/2015/05/22/wordpress-on-bluemix-containers/
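The core idea, sketched with plain Docker (the IBM Containers tooling wraps the same image and volume concepts; the DB host and credentials are placeholders):
# Keep the Joomla web root on a named volume so installed components survive restarts and redeploys
docker volume create joomla_html
docker run -d --name joomla -p 80:80 \
  -v joomla_html:/var/www/html \
  -e JOOMLA_DB_HOST=<your-db-host> \
  -e JOOMLA_DB_USER=<your-db-user> \
  -e JOOMLA_DB_PASSWORD=<your-db-password> \
  joomla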
When you push an application to a runtime, PHP, Java or whatever, it restages all the application sources, wiping out whatever was configured and modified earlier through the CMS interface, while leaving the databases untouched. This applies to Joomla, but also to Drupal, WordPress or any other CMS. So to achieve what you wish, you have 3 options:
- push exactly the filesystem structure you need to Bluemix, including the configuration files and modules to use on it
- use (as suggested above) a container instead of a runtime: even with a container you have to install your CMS on an external Docker volume, otherwise the CMS will be reset every time you restart the container
- use a Bluemix VM
I have just entered into the world of e-commerce, and I am trying to get my Magento website up and running.
I am using the AWS cloud for hosting my website. I am trying to use an architecture where I can run multiple servers connected to a single DB server. Specifically, I want to use an AWS Auto Scaling group along with ELB to start multiple EC2 instances during high load. There is only one Multi-AZ RDS database instance.
As an initial trial, I created 2 EC2 instances and installed Magento on both of them, using the same RDS DB for both. But as it turns out, Magento stores the base URL of the web server in the database itself, which means I can only store the base URL of a Magento website running on one particular server.
To be precise, Magento stores the base URL in the table core_config_data, in the column 'path' where the row values are "web/unsecure/base_url" and "web/secure/base_url"; the column 'value' for the corresponding row specifies the URL of the web server where Magento is installed.
My question is: how can I use multiple servers with EC2 Auto Scaling if Magento permits only one server address as the base URL?
Here's a partial view of the table with 2 rows -
config_id scope scope_id path value
5 default 0 web/unsecure/base_url http://server1.com/magento/
6 default 0 web/secure/base_url http://server1.com/magento/
Are there any other known methods to use horizontal scaling in Magento during heavy load conditions?
I don't think load balancing works like that.
You need a load balancer that receives the requested URL and then passes it off to one of the servers running Magento - so I think you would pass the same URL to both servers anyway, no? I do not know how to do it.
You are trying to set up a very complicated system.
You could look at overriding some functions if you want to have different values for secure and non-secure URLs. Try reading this code to get you started:
//file app/code/core/Mage/Core/Model/Store.php
//class Mage_Core_Model_Store
//function getBaseUrl()
//function getDistroServerVars()
//file app/code/core/Mage/Core/Model/Url.php
//class Mage_Core_Model_Url
//function getBaseUrl()
//file app/code/core/Mage/Core/Controller/Request/Http.php
//class Mage_Core_Controller_Request_Http
//function - I don't know, any of them, none of them
and look at any files containing the string 'substDistroServerVars'; isDirectAccessFrontendName might also expose something. getDistroServerVars is discussed at the end of this great article by the almighty Alan Storm.
But I don't think that is the real answer - for the real answer skip to the links at the end of this tedious monologue.
If this is your first foray into Magento and you really think you are going to get the volume of traffic into your shop that requires load balancing over two servers, then you can afford, *must afford*, third-party hosting and get professionals with many many many man-years of experience running Magento on heavy loads across multiple servers. You will also want to hand off (at least) the images to a CDN.
*I mean, if your shop has that high a volume then it has a high revenue and you should invest that revenue in keeping your shop running: professional hosting with 24/7 support. Otherwise downtime will be expensive and a long implementation will mean lost revenue.
*If you are just trying this out for fun and to learn something about setting up Magento on multiple servers then I recommend two things:
1) Practice getting Magento running on one server first - and optimising for volume there (caching, compilers, DB tuning, log file analysis, flat tables, cron jobs, CDNs, possibly combined JS and CSS, web server tuning and getting the headers right, possibly a full page cache and a sprinkling of Redis) - because that isn't a trivial list on one server, never mind two + a DB server and ELB.
And 2) practice getting Apache or nginx to serve load-balanced content with your ecommerce SSL certificate in place. Only then should you try to join the two systems. Be prepared to spend many months on this - including figuring out Siege, AB or JMeter for simulated load testing.
But if you really want to get the AWS ELB set up, here are a few excellent resources to get you started - particularly the detailed tutorial by Adrian Duke (first link) - pay great attention to the details in the last section of that article subtitled 'Magento'; that may be the answer to your question.
Getting and scaling Magento in the cloud by Adrian Duke
Using AWS Auto Scaling with an Elastic Load Balancer cluster on EC2 (actually a WordPress install, not Magento, but Mr Shroder knows his Magento)
Running Magento in an AWS Environment (All hail Alan Storm)
I've had a rather large amount of success modifying Magento to be a Beanstalk package. The steps (loosely) were:
Install GIT locally
Install AWS Command line tools
Install AWS Beanstalk Command Line
Build a module to upload images to S3 every time one is uploaded in Magento
Utilize OnePica's Magento Extension
Use Amazon's Redis cache for caching data
Use RDS for the database
Use Route 53 for routing &
Use CloudFront for image, JS & CSS distribution
A couple of drawbacks to AWS:
Customizing Magento to look for things is a pain in the ass. As we speak I'm trying to get it to keep sessions persistent between EC2 instances, since the load balancer chops them up.
Every time you need to modify Magento in any way it's a git commit (then we test it locally, via a separate Beanstalk instance) and then a push to production.
Short of that it's been fairly stable. We're not hitting high numbers yet, though.
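For what it's worth, the day-to-day flow with the current EB CLI looks roughly like the sketch below; environment names are placeholders, and the older Beanstalk command line tools used git aws.push for the last step instead.
# Commit the Magento change, verify it on the separate test environment, then promote it
git add app/code/local/MyCompany/MyModule
git commit -m "Add S3 image upload module"
eb deploy magento-test
eb deploy magento-production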
Normally you put a load balancer in front of the nodes to distribute the load, and each node is configured to use the same base_url. MySQL replication can be used if you want multiple DB servers, but I have never found the need to do this. I haven't used Amazon EC2 with Magento, but I have a similar setup in a dedicated server environment with two nodes, one DB server, a load balancer, and shared media.
The diagram here is useful, especially the shared storage for media; you're going to need to do something like this. http://www.severalnines.com/blog/how-cluster-magento-nginx-and-mysql-multiple-servers-high-availability
Also, Amazon seems to provide Elastic Load Balancing, which is what you're after, I think. http://aws.amazon.com/documentation/elasticloadbalancing/
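Concretely, every node points at the load balancer's hostname rather than at an individual server, so the shared core_config_data rows work for all of them. A rough sketch (hostname and credentials are placeholders), followed by a cache flush on each node:
mysql -h <rds-endpoint> -u magento -p magento_db -e "
UPDATE core_config_data
SET value = 'http://my-load-balancer.example.com/magento/'
WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');"
# then clear var/cache/ (or flush the cache from the admin panel) on every node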
I have an AWS account with 14 instances and I'm using Scalr. I added the API reference details and it showed up; at that time the number of instances was pretty low. As I kept adding new instances, it accepted a few and rejected the rest. Now I have a newly created instance on AWS which is not getting loaded into Scalr.
Any ideas?
Instances that you create using AWS will not show up in Scalr.
Instead, you create Farms (in Scalr) through the use of custom and/or pre-configured Scalr Roles. When you launch those farms/roles, it will launch the required instances in AWS. It's like a wrapper around AWS that provides extra features, but it will only ever know about instances that have been launched from a Scalr role.
It is possible to import an existing server into Scalr although it involves installing the scalarizr software onto that server and opening some ports. Full details can be found here. Once complete, you'll have a new role that you can add to a farm and then launch.
I have a web application developed with Struts2, JSP, JPA, Spring and MySQL. I want to move this application to the Amazon cloud. I have not done a cloud deployment before and don't know how to do it.
Can anyone help me with a step-by-step process, a procedure to follow, or a document that will guide me in doing this? Thanks for your help.
Upload your project's .war to Elastic Beanstalk and deploy the project.
The steps to create a new application in Beanstalk are:
1) Create a new application, say "test app", in Elastic Beanstalk and choose the region which best suits your requirements.
2) Create a new environment in the application "test app" and select the application server you'd like, i.e. Tomcat 6 32/64-bit or Tomcat 7 32/64-bit.
3) Upload the .war to the newly created environment.
4) You can provide a custom CNAME through which you can access your web application from a browser.
5) Finally, based on your requirements, you can set the health check interval and the scaling units.
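The same steps can also be scripted with the EB CLI; the names, platform shorthand and artifact path below are placeholders, and the deploy/artifact setting in .elasticbeanstalk/config.yml tells the CLI to upload the built .war instead of zipping the source tree.
eb init test-app -p tomcat -r us-east-1
eb create test-app-env --cname testapp
# In .elasticbeanstalk/config.yml:
#   deploy:
#     artifact: target/testapp.war
eb deploy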
Got it... thanks for the detailed description.
You can do it in two ways:
1. Create a single Beanstalk application.
Create different environments for each company within the created application; in every environment deploy the .war file and provide the resources as per your requirements, such as Tomcat 6/7, the minimum and maximum number of instances for Auto Scaling, the health check monitoring interval, the number of checks before timeout, etc. Finally assign the CNAME (i.e. the URL by which you access the application) associated with the company name; for example, if the web app is for company xyz then provide the CNAME xyz.elasticbeanstalk.com.
2. Create multiple Beanstalk applications, i.e. one for every company, and in each application create multiple environments like Development, Beta, Staging and Live, based on your requirements.
And coming to the DB:
Go for RDS if your DB is a relational DB. Two ways to plan for multiple companies are:
1) Create a single RDS instance and create multiple schemas in it, i.e. one schema per organization.
2) Create a separate RDS instance for every organization - recommended if there are a lot of DB records.
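A rough sketch of the single-application approach combined with DB option 1 (company names and the RDS endpoint are placeholders):
# One environment per company, each with its own CNAME
eb create xyz-env --cname xyz
eb create abc-env --cname abc
# One schema per company on the shared RDS instance
mysql -h <rds-endpoint> -u admin -p -e "CREATE DATABASE xyz_db; CREATE DATABASE abc_db;"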
Let me know if you have any queries.
Happy to help...:)
Please find my inline comments in bold.
Currently, the application is installed on a company's server, and users from the company that will use the application are created.
How is the installation done, and what are the architecture (x86/x64) and platform (Windows Server/Linux) of the server?
The application knows how to manage its users. So every company that needs this application, buys a server and the application is deployed on the server.
Buys a server in the sense that you guys are providing the application and they are launching it on their own server, I mean in their own infrastructure?
What I understood from your reply is that you guys provide a web application to different companies, and those companies deploy your web application on their application server and the DB on their DB server.
Correct me if I am wrong.