I have a simple web application for conducting online exams for college students. All questions are multiple-choice. Around 5000 users will be taking the exam. My backend is MySQL and the front end is PHP. I want to know the hardware configuration of the servers required to host this application and serve the required number of users smoothly.
I am also looking at cloud solutions. If I choose Amazon EC2, can somebody advise me on what type of EC2 instance I should pick for this application?
It is impossible to tell the exact specs of the servers required to run your setup, because there are too many variables. However, it is definitely a good question: when I was a university student, a professor tried exactly this without any load testing, and on exam day the system was overloaded and the exam had to be cancelled!
Start by testing what you already have. You can use something like the ab tool or JMeter. Either will generate the load you specify automatically, so you can check how your actual server performs and act accordingly.
Application design is also important. For example, you can cache all the questions at the web layer to avoid hitting the database on every request. Make the app client-heavy so the server payload stays small (e.g. compact JSON responses), which reduces both download time and load on the server.
Request multiple questions at once and batch the users' answers together to reduce the number of AJAX calls.
Consider a NoSQL solution to avoid the overhead of RDBMS constraints.
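To make the caching and batching suggestions above concrete, here is a minimal PHP sketch. It assumes the APCu extension is enabled; the PDO credentials, table, and column names are placeholders for your own schema. The full question set is cached at the web layer and several questions are returned in one small JSON response.

<?php
// Minimal sketch: cache the exam's question set in APCu so repeated AJAX
// calls do not hit MySQL, and return many questions in one JSON payload.

$db = new PDO('mysql:host=localhost;dbname=exam;charset=utf8mb4', 'exam_user', 'secret'); // placeholder credentials

function fetchQuestionsFromDb(PDO $db, int $examId): array
{
    // Assumed table/columns; adjust to your own schema.
    $stmt = $db->prepare('SELECT id, text, option_a, option_b, option_c, option_d FROM questions WHERE exam_id = ?');
    $stmt->execute([$examId]);
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}

function getQuestions(PDO $db, int $examId): array
{
    $key = "exam_questions_$examId";
    $questions = apcu_fetch($key, $hit);
    if (!$hit) {
        $questions = fetchQuestionsFromDb($db, $examId); // single DB read on a cache miss
        apcu_store($key, $questions, 3600);              // keep for the exam duration
    }
    return $questions;
}

header('Content-Type: application/json');
echo json_encode(getQuestions($db, (int) ($_GET['exam_id'] ?? 0)));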
I was asked this question once at an interview:
"Suppose you own a website where the server is at some remote location. One day, some user calls/emails you saying the site is abominably slow. How would you identify why the site is slow? Also, when you check the website yourself as any user would (using your browser), the site behaves just fine."
I could think of only one thing (which was shot down):
Check the server logs to analyse incoming traffic. Maybe a DoS attack or exceptionally high traffic. The interviewer told me to assume the server has normal traffic and no DoS.
I was kind of lost because I had never thought of this problem. I have almost no idea how running a server/website works. So if someone could highlight a few approaches, it would be nice.
While googling around, I could find only this relevant, wonderful article. That article is kind of too technical for me now, but I'm slowly breaking it down and understanding it.
Since you already said that when you check the site yourself the speed is fine, this means that (at least for the pages you checked) there is nothing wrong with the server and it can serve those pages at a good speed. What you should be figuring out at this point is what the difference is between you and the user who reports that your site is slow. It could be a lot of different things:
Is the user using a slow network connection (mobile for example)?
Does the user experience the same problems with other websites hosted at the same web host? If so, this could indicate a network problem. Normally it could also indicate a resource problem at the web server, but in that case the site would be slow for you as well.
If neither of the above leads to an answer, you can assume that the connection to the server and the server itself are fine. This means the problem must be on the user's device. Find out which browser/OS they use and try to replicate the problem. If that fails, find out whether they use any antivirus or similar software that might cause problems.
This is a great tool for measuring the speed of a web page, and it tells you what makes it slow: https://developers.google.com/speed/pagespeed/insights
I think one of the important things missing from the above answers is server location, which can play a vital role in web performance.
When someone says it takes a long time to open a web page, that usually means high latency, and high latency can be caused by server location.
Let's assume that, as the owner of the web page, you and the server are co-located, so you see low latency.
But if the client is on the other side of the world, latency increases drastically, and hence performance appears slow.
Another factor is caching, which drastically affects latency.
Take Facebook as an example: they have servers all over the world to reduce latency (and for several other advantages), and they use a huge caching system for their hot data (trending topics), whereas cold data (old data) is stored on disk, so it takes longer to load an older photo or post.
So a user might have complained about this simply because they were trying to load some cold data.
I can think of these few reasons (the first two are already mentioned above):
High latency due to the client's location
Server memory might need to be increased
The number of service calls made from the page
A service might have been down at the time of the complaint, preventing the page from loading
Server load might have been too high at the time of the poor experience, in which case resources may need to be increased (e.g. adding another web server to the cluster)
Check if there was any background job running on the server at that time.
It is important to check the logs and the batch job schedules to determine exactly what was running at that time.
Hope this helps.
Normally the user judges that a site is slow by its page load time. If you really want to know what is taking the most time, open the browser's developer tools by pressing F12. In Chrome, click the Network tab and see which calls your application makes and which of them take the longest. In Firefox you need to install Firebug; once you have it, again press F12 and click Net.
One reason could be that the user's role is different from yours. You might have administrator privileges (something like a super-user role), and the code might simply allow everything for that role, meaning it does not do much conditional checking of what is allowed. Sometimes it is considerable overhead to fetch all of a user's privileges and run those checks; how much, of course, depends on how authorization is implemented. That means the page might be slow only for specific roles, so find out the user's role and see whether that is the reason.
It could simply be an issue with the connection of the person reaching your site, OR it may have been a temporary issue, and by the time you checked your site everything was fine again. You could check your logs or ask your host whether there was an issue at the time the slowdown occurred.
This is often a memory issue, and it can be resolved by increasing the heap size of the web server hosting the application. If the application is running on WebLogic Server, the heap size can be increased in the "setEnv" file located in the application home.
Good luck!
Though your question is quite clear, web site optimisation is a very extensive subject.
The majority of popular web development frameworks are, for some reason, extremely processor-inefficient.
The old-fashioned way of developing n-tier web applications is still very relevant and is still considered best practice according to the W3C. If you take a little time to read the source code structure of the most popular web development frameworks, you will see that they run much more code at the server than is necessary.
This may seem like a simple answer, but the less code you run at the server and the more you run at the client, the faster your servers will work.
Sometimes contrasting framework code against the old-fashioned way is the best way to get an understanding of this. Here is a link to a fully working mini web application that follows W3C best practices and runs the minimum amount of code at the server and the maximum amount at the client: http://developersfound.com/W3C_MVC_EX.zip. This codebase is also MVC-compliant.
The codebase comes with a MySQL database dump, PHP, and client-side code. To see it in action you will need to restore the SQL dump to a MySQL instance (the dump came from MySQL 8 Community) and add the user and schema permissions found in the PHP file (conn_include.php), giving the user execute permissions on the schema.
If you contrast this codebase against the most popular web frameworks, it will really open your eyes to just how inefficient those frameworks are. The popular PHP frameworks that claim to be MVC frameworks aren't actually MVC-compliant at all, because they rely on embedding PHP tags inside HTML tags or vice versa (considered very bad practice according to the W3C). Most popular Node frameworks also run far more code at the server than is necessary. Embedded tags also stop asynchronous calls from working properly unless the framework supports AJAX dumps, as Yii 2 does.
Two of the most important rules for MVC compliance are: never embed server-side tags (such as PHP tags) in HTML tags or vice versa (unless there is a very good reason, such as SEO), and never write code to run at the server if it can run at the client. Also, true MVC is based on tier separation, whereas the MVC frameworks are based on code separation. True MVC compliance is very processor-efficient. Don't get me wrong, MVC frameworks are very useful for a lot of things, but if you're developing a site that is going to get millions of hits they are quite useless, or at least they will drive your cloud bills so high that it will really eat into your company's profits.
In summary, frameworks don't give you much control over what code runs at the client or the server and are comparatively inefficient, but you can get prototypes up and running more quickly with less code.
In contrast, the old-fashioned way takes a bit more elbow grease, but you have complete control over what runs at the server and what runs at the client.
As an additional optimisation tip, avoid pass-through queries and triggers and opt for stored procedures instead. Stored procedures did not yet exist when MVC emerged as a paradigm, but they definitely increase the separation of concerns between the tiers and are much more processor-efficient.
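As an illustration, here is a minimal sketch of calling a stored procedure from PHP with PDO instead of sending an ad-hoc query; the procedure name, parameter, and credentials are made up for this example.

<?php
// Hypothetical example: the work is done by a stored procedure in the database,
// so the PHP tier only passes parameters and reads the result set.
$pdo = new PDO('mysql:host=localhost;dbname=appdb;charset=utf8mb4', 'app_user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $pdo->prepare('CALL get_orders_for_customer(:customer_id)');
$stmt->execute(['customer_id' => 42]);
$orders = $stmt->fetchAll(PDO::FETCH_ASSOC);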
Hope this advice helps.
I'm trying to build a shopping-cart-like web app on GAE. So far I haven't deployed anything to GAE and have just been doing some POC work locally... then I read this:
http://borglin.net/gwt-project/?page_id=688
Reading through those "weaknesses" surprised me, and I'm pretty worried about whether putting the app on GAE is a good choice. Would someone, especially anyone with experience building a real-world app handling cash transactions, please give me some ideas/share your thoughts?
The article said GAE has "No https support for your domain". Is that true? I thought all I needed to do was point my domain https://www.abc.com to GAE's https://abc.appspot.com ...
For BigTable, I understand it would be quite hard to build analytic/statistical functionality into my app (e.g. providing a monthly transaction summary). Does anyone have experience handling such a situation...? Export data from BigTable to an RDBMS and run some SQL...?
The article also said that BigTable has bad write performance: "I'm lucky to get 100 writes per second in a GAE request." Is that true? I cannot find any figures to support or disprove it...
I'm now using Spring MVC + Objectify on the server side. Is that too heavy for GAE? Some say Spring can cause a long cold start... how long is it? How long would it take to initialise an app with, say, 20+ different pages/controllers and 20+ kinds of entities/DAOs? Any ballpark figure?
P.S. If you know of any real-world apps built on GAE, please share them here; I want to know how far (or how big) my app could go.
Thanks a lot!
1) That is true. HTTPS is only supported for .appspot.com. A very big shortcoming.
2) That's not really true: you can produce any kind of monthly summary report using the remote API if you need complex joins and the like. You can also export the data and use an offline tool.
3) I haven't seen that kind of write performance bottleneck, but there is eventual consistency to deal with. That said, 100 writes/second is not a small number...
4) I would avoid Spring on App Engine. A lot of people use it happily, but I found that the startup time was very slow and that caused problems.
SSL for custom domains is in a testing phase. Note that HTTPS/SSL was not designed to work with shared-IP hosting (such as GAE), so there are SSL extensions (SNI/VIP) with varying browser support.
Queries are quite weak on Datastore. They are also expensive. There are two ways to do analytics:
a. Create a set of sharded counters and update a counter every time an event happens (e.g. a financial transaction). We use this and it works well. The only downside is that this is "online" analytics: you cannot add new analytics parameters to past data. (See the sketch after point b for the general idea.)
b. Upload (anonymized) data to Google BigQuery and do analytics there.
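For point (a), here is a rough, platform-neutral sketch of the sharded-counter idea, written in PHP against a generic SQL table rather than the GAE Datastore; the table name and shard count are assumptions. Increments are spread across N shard rows to avoid contention on a single row, and the shards are summed whenever the total is needed.

<?php
// Assumes a table counter_shards(name, shard, value) with a unique key on (name, shard).
const NUM_SHARDS = 20; // assumed shard count; raise it for higher write rates

function incrementCounter(PDO $db, string $name): void
{
    $shard = random_int(0, NUM_SHARDS - 1);          // pick a random shard to write to
    $stmt = $db->prepare(
        'INSERT INTO counter_shards (name, shard, value) VALUES (?, ?, 1)
         ON DUPLICATE KEY UPDATE value = value + 1'
    );
    $stmt->execute([$name, $shard]);
}

function counterTotal(PDO $db, string $name): int
{
    $stmt = $db->prepare('SELECT COALESCE(SUM(value), 0) FROM counter_shards WHERE name = ?');
    $stmt->execute([$name]);
    return (int) $stmt->fetchColumn();               // sum all shards for the total
}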
The Datastore has a limit of about 5 writes/updates per second to a SINGLE entity or entity group (some sources say 1 write/s). There is no limit on parallel writes to different entities. Remember, GAE is a distributed system where all apps share ONE BigTable database under the hood, so this is pretty scalable.
I don't have experience with this, but there are various reports on the net. See this http://www.listry.com/blog/2010/03/google-app-engine-cold-start-guide-for
I don't know about the other topics, but what I can tell you is that we use a combination of Guice + Jersey as a substitute for Spring :) It's better for GAE if we take startup time into account.
We are building a mobile app with a Rails CMS to manage it.
What does our app look like?
Every admin user of the app can set up one private channel with a very small amount of data - about 50 short strings.
Users can then download the app, register a few different channels, and fetch the data from the server to their devices. The data is stored locally and will not be fetched again unless the admin user updates it (but we assume that won't happen very often). Every channel will be available to no more than 500 devices.
Users can contribute to a channel, but that data will be stored on S3 and not in the database.
2 important points:
Most of the channels will be active for about 5 months and for roughly 500 users, but most of the activity will happen within the same couple of days.
Every channel is for a small number of users (500), but we hope :) to get to hundreds of thousands of admin users.
Building the CMS with Rails, we found that using SimpleDB is more straightforward than using DynamoDB. But as we are not server experts, we have seen the limitations of SimpleDB, and we don't know whether SimpleDB could handle the amount of data transfer we will have (if our app succeeds). Another important point is that DynamoDB costs are much higher and not dependent on usage, while SimpleDB will be much cheaper at the beginning.
The question is:
Can SimpleDB meet our needs?
Could we migrate to DynamoDB later if our service grows?
Starting out with a new project and not really knowing what to expect from usage, I'd say the better option is to go with SimpleDB. It doesn't sound like your usage is going to be very high, so SimpleDB should be able to handle it without problems. The real power of DynamoDB comes in when you have a lot of load, and it seems you don't fall into that category.
If you design your application correctly, switching from SimpleDB to DynamoDB should be a simple task if you decide at some point that SimpleDB is not working out. I do this kind of switch all the time with other components in my software. Since both databases are NoSQL, you shouldn't have a problem converting between the two; just make sure that any features you use in SimpleDB are also available in DynamoDB, and design your data model for both (DynamoDB has stricter requirements around indexes) so that the two stay compatible. A rough sketch of this kind of abstraction is at the end of this answer.
That being said, plenty of people have been using SimpleDB for their applications, and I don't expect you would see any performance problems unless your product really takes off, at which point you can invest the resources to move to DynamoDB.
Aside from all that, there is the pricing, as you already mentioned. SimpleDB is the obvious solution for your use case.
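On the point above about designing for an easy switch, here is a rough sketch of the kind of narrow storage abstraction that makes swapping backends painless. It is written as a PHP interface purely for illustration (in a Rails CMS the equivalent would be a small Ruby class), and all names are invented for this sketch.

<?php
// The application talks only to this narrow interface; each backend gets its
// own implementation, so moving from SimpleDB to DynamoDB means writing one
// new class rather than touching application code.
interface ChannelStore
{
    public function putChannel(string $channelId, array $attributes): void;
    public function getChannel(string $channelId): ?array;
}

class SimpleDbChannelStore implements ChannelStore
{
    public function putChannel(string $channelId, array $attributes): void
    {
        // ... call the SimpleDB API here ...
    }

    public function getChannel(string $channelId): ?array
    {
        // ... call the SimpleDB API here ...
        return null;
    }
}

class DynamoDbChannelStore implements ChannelStore
{
    public function putChannel(string $channelId, array $attributes): void
    {
        // ... call the DynamoDB API here ...
    }

    public function getChannel(string $channelId): ?array
    {
        // ... call the DynamoDB API here ...
        return null;
    }
}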
I have been tasked with looking for a performance testing solution for one of our Java applications running on a Weblogic server. The requirement is to record production requests (both GET and POST including POST data) and then run these requests in a performance test environment with a copy of the production database.
The reasons for using production requests instead of a test script are:
It is a large application with no existing test scripts, so it would be a large amount of work to write scripts covering the entire application.
Some performance issues only appear when users do a number of actions in a particular order.
To test using actual user interaction with the system, not an estimate of how users might interact with it. We all know that users will do things we have not thought of.
I want to be able to fix performance issues and rerun the requests against the fixed code before releasing to production.
I have looked at using JMeter's Access Log Sampler with the server access logs; however, the access logs do not contain POST data, and the Access Log Sampler only looks at the request URL, so it cannot simulate users submitting form data.
I have also looked at using the JMeter HTTP Proxy Server; however, this can record the actions of only one user and requires the user to configure their browser to use the proxy. The same limitation exists with Tsung and The Grinder.
I have looked at using Wireshark and TCReplay but recording at the packet level is excessive and will not give any useful reports at a request level.
Is there a better way to analyze production performance considering I need to be able to test fixes before releasing to production?
That is going to be a hard ask. I work with Visual Studio Test Edition to load test my applications, and we are only able to "estimate" the users' activity on the site.
It is possible to look at the logs and gather information on the likelihood of certain paths through your app. You can then look at the production database for the likely values entered in any POST requests. From that you will have to build load tests that approximate the usage patterns of your production site.
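As a rough example of mining the logs for path likelihood, a small script along these lines (the log location and combined-log format are assumptions) can tally how often each URL is requested:

<?php
// Count request paths in a combined-format access log to estimate usage patterns.
$counts = [];
foreach (file('/var/log/access.log', FILE_IGNORE_NEW_LINES) as $line) {
    if (preg_match('/"(?:GET|POST) (\S+) HTTP/', $line, $m)) {
        $path = parse_url($m[1], PHP_URL_PATH) ?: $m[1];
        $counts[$path] = ($counts[$path] ?? 0) + 1;
    }
}
arsort($counts);
print_r(array_slice($counts, 0, 20, true)); // the 20 most requested paths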
With any current tools I don't think it is possible to record and play back actual user interaction.
It is possible to alter your web app so that it records and logs every request and POST against session and datetime. This custom logging could then be used to generate load-test requests against a test website. This would be a serious code change to your existing site and would likely have a performance impact.
That said, I have worked with web apps that do this level of logging and the ability to analyse the exact series of page posts/requests that caused an error is quite valuable to a developer.
So in summary: It is possible, but I have not heard of any off the shelf tools that do it.
Please check out this whitepaper by Impetus Technologies: http://www.impetus.com/plabs/sandstorm.html
Honestly, I'm not sure the task you're being asked to do is even possible, let alone a good idea. Depending on how complex the application's backend is, and how perfectly you can recreate the state (i.e. all the way down to external SOA services or the time/clock), it may not be possible to make those GET and POST requests reproduce the same behavior.
That said, performance testing against production data is always great, but it usually requires application-specific knowledge that will stress said data. Simply repeating HTTP GETs and POSTs will almost certainly not yield useful results.
Good luck!
I would suggest the following to capture the production requests and simulate an accurate workload:
1) Use Coremetrics: Coremetrics provides solutions that let you learn the application's usage patterns, which helps in coming up with an accurate workload model. This model can then be converted into test scripts and executed against a masked copy of the production database. This will give you accurate results about the application's performance under realistic conditions.
2) Another option is to create a small utility using AOP (aspect-oriented programming) to trace all requests and the corresponding method calls. This helps in identifying the production usage pattern and, in turn, in simulating the workload accurately. AOP frameworks such as AspectJ can be used; this does not require any code changes, since the instrumentation can be done on the fly. Another benefit is that it can be enabled only for a specific time window and then turned off.
We have a new project for a web app that will display banner ads on websites (as a network), and our estimate is that it will handle 20 to 40 billion impressions a month.
Our current language is ASP... but we are moving to PHP. Does PHP 5 have limits when scaling a web application? Or should I have our team invest in picking up JSP?
Or, is it a matter of the app server and/or DB? We plan to use Oracle 10g as the database.
No offense, but I strongly suspect you're vastly overestimating how many impressions you'll serve.
That said:
PHP or other languages used in the application tier really have little to do with scalability. Since the application tier delegates its state to the database or equivalent, it's straightforward to add as much capacity as you need behind appropriate load balancing. Choice of language does influence per-server efficiency and hence costs, but that's different from scalability.
It's scaling the state/data storage that gets more complicated.
For your app, you have three basic jobs:
deciding which ad to show
serving the ad
logging the impression
Each of these will require thought and likely different tools.
The second, serving the ad, is the simplest: use a CDN. If you actually serve the volume you claim, you should be able to negotiate favorable rates.
Deciding which ad to show is going to be very specific to your network. It may be as simple as reading a few rows from a database that give the ad placements for a given property in a given calendar period, or it may be complex contextual advertising like Google's. Assuming it's more the former, and that the database of placements is small, this is the relatively simple task of scaling database reads. You can use replication trees or, alternatively, a caching layer such as memcached.
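As a sketch of the caching-layer option, assuming the php-memcached extension and an assumed placements table, a read-through cache in front of the placement lookup might look like this:

<?php
// Serve placements from memcached and fall back to the database on a miss,
// so the placement table is read at most once per property per minute.
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

function loadPlacementsFromDb(PDO $db, int $propertyId): array
{
    // Assumed table/columns; adjust to your own schema.
    $stmt = $db->prepare('SELECT ad_id, weight FROM placements WHERE property_id = ? AND NOW() BETWEEN starts_at AND ends_at');
    $stmt->execute([$propertyId]);
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}

function placementsForProperty(Memcached $cache, PDO $db, int $propertyId): array
{
    $key = "placements_$propertyId";
    $placements = $cache->get($key);
    if ($placements === false) {                                // cache miss
        $placements = loadPlacementsFromDb($db, $propertyId);   // single DB read
        $cache->set($key, $placements, 60);                     // cache for 60 seconds
    }
    return $placements;
}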
The last will ultimately be the most difficult: how to scale the writes. A common approach would be to still use databases, but to adopt a sharding scaling strategy. More exotic options might be to use a key/value store supporting counter instructions, such as Redis, or a scalable OLAP database such as Vertica.
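As a sketch of the key/value-store option, assuming the phpredis extension, impressions could be counted in Redis per ad per day and flushed to the database on a schedule rather than written one row at a time; the key layout and function name are made up for this example.

<?php
// Increment an in-memory counter for each impression; a periodic job can later
// copy the daily totals into the database for reporting.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

function recordImpression(Redis $redis, int $adId): void
{
    $key = sprintf('impressions:%s:%d', date('Y-m-d'), $adId);
    $redis->incr($key);
}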
All of the above assumes that you're able to secure data center space and network provisioning capable of serving this load, which is not trivial at the numbers you're talking.
You do realize that 40 billion per month is roughly 15,500 per second, right?
Scaling isn't going to be your problem - infrastructure, period, is going to be your problem. No matter what technology stack you choose, you are going to need an enormous amount of hardware - as others have said, in the form of a farm or cloud.
This question (and the entire subject) is a bit subjective. You can write a dog slow program in any language, and host it on anything.
I think your best bet is to see how your current implementation works under load. Maybe just a few tweaks will make things work for you - but changing your underlying framework seems a bit much.
That being said - your infrastructure team will also have to be involved as it seems you have some serious load requirements.
Good luck!
I think it is not a matter of language, but it can be a matter of database speed as much as CPU processing speed. Have you considered a web farm? That way you can have more than one machine serving your application. There are several ways to implement this solution: you can start with two servers and add more as the app requires more processing volume.
On another point, Oracle 10g is a very good database server; in my humble opinion you only need a standalone Oracle server to handle the volume of requests. Remember that a SQL server is faster when people request more or less the same things each time, which is what happens in a web application if you plan your database schema carefully.
You should also check out the ad-server application solutions available; there are some very good ones. Just search Google for "open source ad servers".
PHP will be capable of serving your needs. However, as others have said, your first limits will be your network infrastructure.
But your second limit will be writing scalable code. You will need good abstraction and isolation so that resources can easily be added at any level: things like a fast data-object mapper, multiple data-caching mechanisms, separate configuration files, and so on.
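As one small example of the "separate configuration files" point, per-environment settings can be kept outside the code so the same codebase runs unchanged on every server; the file names and keys below are invented for illustration.

<?php
// config/production.php - one file per environment, returned as a plain array.
return [
    'db_dsn'       => 'mysql:host=db-master.internal;dbname=ads',
    'memcached'    => ['127.0.0.1', 11211],
    'cdn_base_url' => 'https://cdn.example.com',
];

<?php
// bootstrap.php - load the file that matches the current environment.
$env    = getenv('APP_ENV') ?: 'production';
$config = require __DIR__ . "/config/{$env}.php";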