Using Google Cloud only for the database - performance

I have a large website that is struggling a bit: uptime isn't great, speed isn't either, and there is a lot of load on it. I am thinking of moving to Google Cloud, but I don't have time to manage a server and become my own host.
So my idea is to serve just the database from Google Cloud (so I can benefit from auto-scaling) and leave the website files where they are now.
My question is: will that put less load on the CPU, and will it eventually improve the website's uptime?
Thanks

Yes, I think it will help improve website performance, and you should see a noticeable jump: the database utilizes a lot of CPU and RAM on the server, so moving it to a separate machine frees those resources for the web server. If you also want to decrease page load time, I would suggest other services such as [Cloudflare][1] or any other CDN, along with standard web server optimization techniques.
You can use the Google Cloud SQL service if you are using a MySQL or PostgreSQL database. Otherwise, you can use a Google Compute Engine VM, which you would have to manage yourself. If you want a complete website auto-scaling option, I would suggest Google App Engine, which lets you auto-scale easily and is used by many well-funded startups.
[1]: https://www.cloudflare.com/
https://cloud.google.com/sql/docs
https://cloud.google.com/compute/pricing

If you want to move your data to GCP, I highly recommend using Cloud SQL. If budget is not an issue, auto-scaling would be advantageous.
> Will that put less load on the CPU?

Most likely it will, since you're taking the database processing away from your server.
You may also want to look into Google App Engine connected to Cloud SQL; keeping both in the same region will reduce latency.
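One practical caveat when the database moves to a separate machine: every query then pays a network round trip, so chatty query patterns (one query per row) hurt far more than they do against a local database. A minimal sketch of batching, using Python's in-memory `sqlite3` as a stand-in for the real MySQL/Cloud SQL instance (table and data are made up):

```python
import sqlite3

# Stand-in for the real database; against a remote Cloud SQL instance every
# execute() below would also pay a network round trip, so fewer, larger
# queries matter much more than they do locally.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (id, name) VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(100)])

wanted = [3, 7, 42]

# Chatty: one round trip per id (the classic N+1 pattern).
chatty = [conn.execute("SELECT name FROM users WHERE id = ?", (i,)).fetchone()[0]
          for i in wanted]

# Batched: a single round trip for all ids.
placeholders = ",".join("?" * len(wanted))
batched = [row[0] for row in conn.execute(
    f"SELECT name FROM users WHERE id IN ({placeholders}) ORDER BY id", wanted)]

assert chatty == batched == ["user3", "user7", "user42"]
```

The same batching idea applies whatever driver or ORM the site uses; the win grows with the latency between the web server and the database.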

Related

Slow MEAN Stack Web applications

I have three MEAN stack web applications hosted on a shared hosting plan. They run really slowly (it takes minutes to log in and minutes to call the database) and I'm not sure how to optimise the performance. I have created three backend servers so that each application can call the backend separately. I have ensured that my files are gzipped and served over HTTP/3. What else should or can I do on top of that? I can't seem to find much related information online. Please give me any suggestions you may have!
Would implementing lazy loading help? If so, please share some easy examples because I'm still new. Much appreciated!
I'd suggest moving off of shared hosting and using one of the newer generation developer-focused hosting platforms like Render or Adaptable.io. Adaptable includes MongoDB, so it's great for MEAN stack. With Render, you'd probably use MongoDB Atlas. Both provide free tiers that smaller apps can fit within.
With any of the next-gen hosting platforms, you just connect a GitHub repo with your source code and they automatically deploy your app to the cloud. You don't have to deal with keeping servers up to date, optimizing database performance or anything like that.
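On the lazy-loading question above: on the backend, lazy loading mostly means pagination, serving one small page per request instead of the whole collection. A framework-free Python sketch of the idea (in a MEAN app the equivalent is Mongoose's `find().skip().limit()`; the data here is made up):

```python
# Fake dataset standing in for a MongoDB collection.
records = [{"id": i, "title": f"post {i}"} for i in range(250)]

def get_page(page: int, size: int = 20):
    """Return one page of records plus a flag saying whether more remain."""
    offset = page * size
    items = records[offset:offset + size]
    return {"items": items, "has_more": offset + size < len(records)}

first = get_page(0)   # first 20 records
last = get_page(12)   # final partial page (records 240..249)
assert len(first["items"]) == 20 and first["has_more"]
assert len(last["items"]) == 10 and not last["has_more"]
```

The frontend then requests the next page only when the user scrolls or clicks "load more". Also make sure the fields you sort and filter on are indexed; minutes-long database calls very often trace back to missing indexes rather than to hosting.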

Where to store files (pictures & vids) for my website?

I'm a newbie web developer and I have a basic question regarding my Laravel-based website: where should I put my files? I know there are services like Amazon S3, but firstly I don't know how to work with them, and secondly they are NOT FREE.
There is going to be a fairly large amount of data, including pictures and videos (around 10 GB). Where should I store them, and how should I use Laravel to let users upload files?
If it will be a bigger project, you should use a cloud service. This is the direction backend development is heading, because it makes your project much easier and faster to maintain and run. If you build your own backend instead, it will take a long time to get it done, since you have to learn a lot of new things and be good at them. There are many key aspects you have to be aware of, like security, scaling, performance and so on. You suggested Amazon AWS, or, in my opinion much better, Google Firebase. I think Firebase should be your pick because it is really easy to understand and has great documentation. Next to the storage service (Google Cloud Storage), there are several other services you could use in the future, like analytics, machine learning or NoSQL databases. And the good thing is that you can connect them all together.
With Google Firebase you get the free Spark plan, which is completely free with some limitations. If you scale to many users, you can upgrade to the other plans, which are not very expensive. Don't forget that your own backend would cost you time as well as money for electricity and hardware.
If you have more questions be free to ask me :)
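Wherever the files end up (S3, Google Cloud Storage, or local disk via Laravel's `Storage` facade), it is worth validating uploads server-side before storing anything. A hypothetical Python sketch; the extension list and size limit are illustrative only, not recommendations:

```python
import os

# Illustrative limits: accept only common image/video types, max 50 MB each.
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".mp4", ".webm"}
MAX_BYTES = 50 * 1024 * 1024

def validate_upload(filename: str, size_bytes: int) -> bool:
    """Reject uploads with unexpected extensions or implausible sizes."""
    ext = os.path.splitext(filename)[1].lower()
    return ext in ALLOWED_EXTENSIONS and 0 < size_bytes <= MAX_BYTES

assert validate_upload("holiday.JPG", 2_000_000)
assert not validate_upload("malware.exe", 1_000)
assert not validate_upload("huge.mp4", 200 * 1024 * 1024)
```

In Laravel itself the same checks are usually expressed as validation rules (e.g. `mimes` and `max`) before the file is moved to its storage disk.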

How to make a website use a distributed file system (Hadoop) for data management

I am new to big data technology, and curious how it relates to conventional application development.
The conventional way to develop a web application is to have a hosting server (or application server) and a database to manage the data.
But let's say I have a huge data set generated by the website (i.e. GBs per second); then the website falls into the category of managing big data.
Let's suppose I have a cluster of 20 computers, each with a 200 GB hard drive and a Core i3 processor. So now I have enough processing and storage power for the website (of course Hadoop is scalable too, if I need more resources).
How do I set up an application server to host the website on this cluster?
Will I need load balancers for my application server, since there is a higher velocity of HTTP requests to it?
Can anyone please guide me?
Thanks in advance.
EDIT:
I just wanted to get an overview of how web application development works in relation to big data. Let's imagine Facebook. It is basically a web application. How application servers and database management are done for Facebook is my curiosity.
It is a fact that a company as big as Facebook will have to use distributed systems, e.g. Hadoop clusters, and my question relates to the same concept. But Facebook has huge clusters, and understanding how they have been implemented is tough, which is why my question mentions a cluster of 20 computers. If someone has experience setting up Hadoop clusters for web application hosting, I would request that they share the knowledge.
I don't know much about Hadoop, but if I were going to make a web site I would use Visual Studio.
https://msdn.microsoft.com/en-us/library/k4cbh4dh.aspx?f=255&MSPPError=-2147217396
https://www.youtube.com/watch?v=GIRmPB0xshw
Visual Studio Express is free and very easy to use.
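Back to the original question about spreading HTTP requests and data over a 20-node cluster: the common building block is hash partitioning, where each key (a session id, a file block id, ...) is hashed to pick a node. A toy sketch of the idea; real systems (HDFS block placement, consistent hashing rings) add replication and handle nodes joining or leaving:

```python
import hashlib

# The 20-node cluster from the question.
NODES = [f"node{i:02d}" for i in range(20)]

def node_for(key: str) -> str:
    """Deterministically map a key to one node via a hash."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# The same key always lands on the same node...
assert node_for("session-abc") == node_for("session-abc")
# ...while many different keys spread across many nodes.
used = {node_for(f"key-{i}") for i in range(1000)}
assert len(used) > 10
```

A load balancer in front of the application servers does essentially this (or plain round robin) for incoming requests, while HDFS does its own placement for data blocks; the two layers are configured separately.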

What are the speediest web hosting choices out there that scale to large traffic spikes and can handle fast page loads?

Is cloud hosting the way to go? Or is there something better that delivers fast page loads?
The reason I ask is because I run a BuddyPress site on a Bluehost dedicated server, but it seems to run slowly at most times of the day. This scares me because the site isn't live yet, and I'm afraid that when it gets traffic it'll become worse and my visitors will lose interest. I use Amazon's cloud to handle all my media, JS, and CSS files, along with a caching plugin, but it still loads slowly at times.
I feel like the problem is Bluehost, because I visit other sites running BuddyPress and they seem to load instantly. I'm not web-hosting savvy, so can someone please point me in the right direction here?
The hosting choice depends on many factors such as technical requirements, growth rates, burst rates, budgets and more.
Bigger Hardware
To scale up hosting operation, your first choice is often just using a more powerful server, VPS, or cloud instance. The point is not so much cloud vs. dedicated but that you simply bring more compute power to the problem. Cloud can make scaling up easier - often with a few clicks.
Division of Labor
The next step is often division of labor. You offload the database, static content, caching or other items to specific servers or services. For example, you could offload static content to a CDN, or use a dedicated database server.
Once again, cloud vs non-cloud is not the issue. The point is to bring more resources to your hosting problems.
Pick the Right Application Stack
I cannot stress enough how important it is to pick the right underlying technology for your needs. For example, I recently helped a client switch from an Apache/PHP stack to a Varnish/Nginx/PHP-FPM stack for a very busy WordPress operation (>100 million page views/mo). This change boosted capacity by nearly 5X with modest hardware changes.
Same App. Different Story
Also just because you are using a specific application, it does not mean the same hosting setup will work for you. I don't know about the specific app you are using but with Drupal, Wordpress, Joomla, Vbulletin and others, the plugins, site design, themes and other items are critical to overall performance.
To complicate matters, user behavior is something to consider as well. Consider a discussion forum with a 95:1 read:post ratio. What if you do something in the design that encourages more posts and the ratio moves to 75:1? That means more database writes, less caching, etc.
In short, details matter, so get a good understanding of your application before you start to scale out hosting.
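The caching layer mentioned above (Varnish, or a WordPress caching plugin) can be reduced to a simple idea: keep a rendered page for a short TTL so repeat hits never touch PHP or the database. A toy sketch, with an injectable clock so the behaviour can be demonstrated deterministically:

```python
import time

class PageCache:
    """Toy page cache: store rendered bodies for ttl_seconds."""
    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable for testing
        self.store = {}             # url -> (expires_at, body)

    def get_or_render(self, url: str, render):
        now = self.clock()
        hit = self.store.get(url)
        if hit and hit[0] > now:
            return hit[1]           # cache hit: skip the expensive render
        body = render(url)          # cache miss: do the expensive work
        self.store[url] = (now + self.ttl, body)
        return body

renders = []
def render(url):                    # stand-in for PHP + database work
    renders.append(url)
    return f"<html>{url}</html>"

fake_now = [0.0]
cache = PageCache(ttl_seconds=60, clock=lambda: fake_now[0])
cache.get_or_render("/home", render)
cache.get_or_render("/home", render)   # within TTL: served from cache
assert len(renders) == 1
fake_now[0] = 61.0                     # TTL expired, must re-render
cache.get_or_render("/home", render)
assert len(renders) == 2
```

This is also why the read:post ratio above matters: every write invalidates cached pages, so a shift toward more writes directly lowers the cache hit rate.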
A hosting service is part of the solution. Another part is proper server configuration.
For instance, this guy optimized his setup to serve 10 million requests a day from a micro instance on AWS.
I think you should look at your server config first, then shop for other hosts. If you can't control the server configuration, try AWS, Rackspace or other cloud services.
Just an FYI: you can sign up for AWS and use a micro instance free for one year. In the link I posted, he just optimized on the same server. You might have to upgrade to a small instance, because Amazon has stated that micro instances are only meant to handle spikes, not sustained traffic.
Good luck.

Basic AWS questions

I'm a newbie on AWS, and it has so many products (EC2, Load Balancer, EBS, S3, SimpleDB etc.) and so many docs that I can't figure out where to start.
My goal is to be ready for scalability.
Suppose I want to set up a simple web server which accesses a database on MongoLab. I suppose I need one EC2 instance to run it. At this point, do I need anything more (EBS, S3, etc.)?
At some point my app will have enough traffic that I must scale it. I was thinking of starting a new copy (instance) of my EC2 machine. But then it will have another IP. So how is traffic distributed between the two EC2 instances? Is that done automatically? Must I hire a load balancer service to distribute the traffic? And will I then have to pay for 2 EC2 instances and 1 LB? At this point, do I need anything more (e.g. an Elastic IP)?
Welcome to the club Sony Santos,
AWS is a very powerful architecture, but with this power comes responsibility. I, and presumably many others, have learned the hard way while building applications on AWS's services.
You ask, where do I start? This is actually a very good question, but you probably won't like my answer. You need to read and do research about all the technologies offered by Amazon, and even other providers such as Rackspace, GoGrid, Google's cloud and Azure. Amazon is not easy to get going with, but it's not really meant to be; its focus is on being very customizable and having a very extensive API. But let's get back to your question.
To run a simple web server you would start an EC2 instance; by default the instance runs on a disk drive called EBS. Essentially an EBS drive is a normal hard drive, except that you can do lots of other cool stuff with it, like detach it from one server and move it to another. S3 is really more of a file storage system; it's more useful if you have a bunch of images or want to store a lot of database backups, but it's not a requirement for a simple web server. Just running an EC2 instance is all you need; everything else happens behind the scenes.
If your app reaches a lot of traffic you have two options. You can scale your machine up by shutting it down and restarting it as a larger instance. Generally speaking this is the easiest thing to do, but you'll eventually get to a point where you either cannot handle all the traffic with one instance even at the larger size and decide you need two, OR you'll want a more fault-tolerant application that stays online in the event of a failure or an update.
If you create a second instance, you will need some form of load balancing. I recommend Amazon's Elastic Load Balancer, as it's easy to configure and its integration with the cloud is better than Round Robin DNS or an application like HAProxy. Elastic Load Balancers are not expensive; I believe they cost around $18/month plus the data that passes through the load balancer.
But no, you don't need anything else to scale up your site. Two EC2 instances and an ELB will do the trick.
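For intuition, the distribution idea at the heart of a load balancer can be sketched as plain round robin over the instances (ELB itself adds health checks, TLS termination, connection draining and more on top):

```python
from itertools import cycle

# The two EC2 instances from the answer above; names are made up.
instances = ["ec2-a", "ec2-b"]
rr = cycle(instances)  # endless round-robin iterator over the instances

# Each incoming request is handed to the next instance in turn.
assigned = [next(rr) for _ in range(5)]
assert assigned == ["ec2-a", "ec2-b", "ec2-a", "ec2-b", "ec2-a"]
```

Adding capacity then just means adding an entry to the pool, which is exactly what happens when you register a new instance with the ELB.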
Additional questions you didn't ask but probably should have.
How often does an EC2 instance experience hardware failure and crash my server? What can I do if this happens?
It happens frequently, usually in batches. Sometimes I go months without any problems, then a few servers will crash at once. It's definitely something you should plan for; I didn't in the beginning, and I paid for it. Make sure you create scripts, keep backups and have a recovery plan ready in case your server fails. Either be OK with the site being down, or have a load-balanced solution from day one.
What's the hardest part about scalability?
Testing, testing, testing... Don't ever assume anything. Also be prepared for sudden spikes in your traffic. You have to be prepared for anything: if your page goes from 1 to 1,000 people overnight, are you ready to handle it? Have you tested what you *think* will happen?
Best of luck and have fun... I know I have :)
