Where do you get information about how to build scalable, high-performance web apps? I mean architecture, best practices, etc., regardless of platform and language: .NET, PHP, Java ...
Did you have your own 'epic fails' in your projects and then refactor your system over a few nights, or did you get the information from the internet?
Are there any communities where I could share my own experience and get some feedback?
Yeah, I know that every project is individual.
You can read the High Scalability blog. If you have questions about architecture and scalability, you can always use Stack Overflow, unless the question is subjective.
It is not easy to answer this question. Language and platform take a secondary place when thinking about scalability.
"Scalability is actually a property of a system, not an individual layer of that system, infrastructure. Even with the best, sexiest, most automatic scaling layer, you can easily write code that just doesn't scale. - glyph"
However, you can immerse yourself in a very good collection of resources on this topic at
http://www.royans.net/arch/library/
Just focusing on the web part of scalable web architecture, you might want to take a look at these 7 reasons why you should be using XMPP instead of AJAX, especially if your web app needs to scale with lots of real-time social features.
I'm wondering if there are any utilities/patterns/paradigms/standards for monitoring React applications in production.
I've seen a lot of documentation about React performance debugging that recommends the Chrome Dev Tools (which are great, but aren't a passive way to monitor end-user performance).
How could I log data to know how long users are waiting for components to mount or render?
The only thing I've thought of so far is creating a Loggable[Pure]Component that extends React.[Pure]Component and whose constructor, componentWillMount/Update, and componentDidMount/Update methods log render/mount times to a server. Components I want to monitor can then extend these classes and, if need be, call super() in the lifecycle methods before doing their own work. To know which component each metric belongs to, I'd have to expose a method on the Loggable[Pure]Component class that does something silly like setUniqueId, and each derived class would have to call it in its constructor.
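Concretely, the idea would look something like this (a sketch only; LoggableComponent, logMetric, and the /metrics endpoint are all invented names):

```ts
import React from "react";

// Invented collector endpoint; swap in whatever transport you actually use.
function logMetric(componentId: string, phase: "mount", ms: number): void {
  void fetch("/metrics", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ componentId, phase, ms }),
  });
}

export abstract class LoggableComponent<P = {}, S = {}> extends React.Component<P, S> {
  // Each subclass identifies itself so metrics can be attributed to it.
  protected abstract componentId: string;
  // Field initializers run during construction, so this captures mount start.
  private mountStart = performance.now();

  componentDidMount(): void {
    logMetric(this.componentId, "mount", performance.now() - this.mountStart);
  }
}

// A monitored component just extends the base class.
class SearchResults extends LoggableComponent<{ query: string }> {
  protected componentId = "SearchResults";
  render() {
    return React.createElement("div", null, this.props.query);
  }
}
```

Even this sketch requires every monitored class to hard-code its own componentId, which is the part that feels wrong.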
This all seems terrible and I'm very much hoping there are some things people out there have implemented, but I haven't found anything thus far.
I would have a look at some APM (application performance monitoring) tools; they handle frontend monitoring as well as backend monitoring. They all support React, and folks use them all the time for exactly this use case. It really depends on your goals for the monitoring: are you doing this for fun? Do you have a startup? Are you working for a large enterprise? There are three major players in this market.
AppDynamics - Enterprise APM, handles the most complex apps. Unified product offering delivered SaaS or on-premises. Has deep database, server, and other monitoring.
Dynatrace - Enterprise APM, handles complex apps well. Fragmented portfolio, but the SaaS product is good. The SaaS product has limited depth in some ways. Handles server and cloud infrastructure monitoring well.
New Relic - Easy and cheap(er than others), not as in-depth as some other options. Tends to be popular with small companies. Does a good job monitoring cloud infrastructure services.
These products all do what you are looking for, but it depends on your goals with the data and how you plan to analyze it.
If you want something free and less functional, there are ways to do this with open source, but you'll have to stand up and manage a fairly complex stack. Here is one option.
Check out boomerang, which can log/extract the metrics you are looking for. It doesn't "understand" React, but it should work. This data can be posted to many different systems; the best suited is likely the ELK stack (open-source log analytics, and more). Here is one of several examples that marries the two to provide analysis of browser performance: https://github.com/naukri-engineering/NewMonk
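For reference, wiring boomerang up is roughly this much code (a sketch; the beacon URL is whatever collector endpoint you stand up in front of Logstash/Elasticsearch):

```ts
// boomerang exposes a global BOOMR once its script is loaded on the page.
declare const BOOMR: { init(config: { beacon_url: string }): void };

BOOMR.init({
  // Collected timings are beaconed here; the assumption is that this endpoint
  // forwards them into the ELK stack for analysis.
  beacon_url: "https://example.com/beacon",
});
```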
I have a question about microservice implementation. Right now I am using an API gateway to process all GET requests to my individual services and using Kafka to handle asynchronous POST, PUT, and DELETE requests. Is this a good way of handling requests in a microservice architecture?
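For concreteness, the write path currently looks roughly like this (a sketch using kafkajs; the topic naming and payload shape are just placeholders):

```ts
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "api-gateway", brokers: ["kafka:9092"] });
const producer = kafka.producer();

// Called once at gateway startup.
export async function start(): Promise<void> {
  await producer.connect();
}

// POST/PUT/DELETE requests are acknowledged immediately and published as
// events; the owning service consumes and applies them asynchronously.
export async function enqueueWrite(
  entity: string,
  action: "create" | "update" | "delete",
  payload: unknown,
): Promise<void> {
  await producer.send({
    topic: `${entity}-commands`,
    messages: [{ value: JSON.stringify({ action, payload, at: Date.now() }) }],
  });
}
```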
Your question is too unspecific to give a good answer. What makes a good architecture depends entirely on the details of your use cases. Are you serving web pages, streaming media, amassing data for analysis, or something completely different? We would also need to know your requirements in terms of concurrency, consistency, and scalability, as well as your constraints: budget, size of the development team, ease of development, dev skills, etc.
For example, the decisions you have taken may be considered good if you have strong requirements for highly scalable ingestion of large data sets and very frequent data collection, as well as the team to support it. But they may be considered bad if you have only a small team and are trying to get a quick and cheap MVP out for a new service with limited scalability requirements (because the complexity of the solution slows down your development unnecessarily).
They may be good because the development team is familiar with those technologies and can develop effectively with them. Or they may be bad because your team knows nothing about them and the investment in learning them will not be justified by long-term gains.
Don't forget that one of the ideas of the microservices architectural style is that each service can be owned by a distinct team that makes its own decisions about which technology to use for implementation (for whatever reason: ease of development, business reasons, etc.). In other words, the microservices style embraces the old wisdom that architecture follows organization.
Here is a link to recommended further reading.
I'd like to become a guru in high-performance (100k+ views/requests) web and web-service applications.
What technologies/patterns/skills do you recommend looking at?
Basically, I have good skills in ASP.NET/.NET-based web development, but I'd like to know how the big things are built (on any platform, not just the .NET technology stack).
Thank you.
For web/web services, the most common bottleneck is data retrieval,
so you would first need to concentrate on SQL performance tuning (indexes, stored-procedure fine-tuning, etc.).
For web sites, you would also need to look at things like JS minification, server-side rendering, etc.
In addition, learning how to read performance counters and Fiddler output will help you pinpoint probable performance bottlenecks; the sketch below is a quick client-side starting point.
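The standard browser Performance API (nothing framework-specific is assumed here) can break a page load into phases:

```ts
// Break the page load into phases to see where the time actually goes.
const [nav] = performance.getEntriesByType(
  "navigation",
) as PerformanceNavigationTiming[];

console.table({
  "DNS + connect (ms)": nav.connectEnd - nav.domainLookupStart,
  "time to first byte (ms)": nav.responseStart - nav.requestStart,
  "download (ms)": nav.responseEnd - nav.responseStart,
  "DOM processing (ms)": nav.domContentLoadedEventEnd - nav.responseEnd,
});
```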
If you are concerned about performance with web services (consuming and producing), you might want to research alternative web service packages and replace the XML serialization/deserialization bits. XML serialization/deserialization is one of the slowest, most painful processes surrounding the delivery of data, at least when it comes to processing.
Other than that: look for the bottlenecks in order, from the largest to the smallest.
What makes a site good for high traffic?
Does it have more to do with the hardware/infrastructure, or with how one writes the software, using Java as the example, if it matters?
I'm wondering how the software changes just because it is expected that billions of users will be on the site, if at all.
My understanding up to this point is that the code doesn't change, but that it is deployed on multiple servers in a cluster, with a load balancer distributing the load, so on any one server/deployment the application is just like any other standard application/website.
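To make my mental model concrete (Node rather than Java, but the idea should be the same), I picture something like Node's built-in cluster module, where identical copies of the same code are forked and a balancer just spreads requests across them:

```ts
import cluster from "node:cluster";
import http from "node:http";
import { cpus } from "node:os";

if (cluster.isPrimary) {
  // One identical worker per core; a load balancer plays the same role
  // across whole machines.
  for (let i = 0; i < cpus().length; i++) cluster.fork();
} else {
  // Every worker runs exactly the same application code.
  http
    .createServer((_req, res) => res.end(`served by pid ${process.pid}\n`))
    .listen(3000);
}
```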
I highly recommend reading Jeff Atwood's blog post on micro-optimization. In earlier posts he talks about how this site was built and the hardware upgrades it has had (quickly summarized: better hardware helps only to the extent that it is actually faster), but the real speed of a site comes from good programming, and that article should answer some of your site-programming questions quite well.
Hardware is cheap. Programming is expensive.
There are some programming techniques to make sure your code can handle multiple simultaneous views/updates. If you're using an existing framework, much of that work is (hopefully) done for you, but otherwise you're going to find that what worked for a few hundred hits an hour on one server isn't going to work when you're getting hundreds of thousands of hits and have to deploy multiple load-balanced machines. One such technique, optimistic concurrency control, is sketched below.
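Optimistic concurrency control rejects a write if the row changed since it was read. A sketch, where the query helper and the articles schema are hypothetical:

```ts
// Hypothetical parameterized-query helper; any SQL client works the same way.
declare function query(
  sql: string,
  params: unknown[],
): Promise<{ rowCount: number }>;

// The UPDATE succeeds only if the version the user originally read is still
// current; a concurrent writer bumps the version and turns this into a no-op.
export async function saveArticle(
  id: number,
  body: string,
  expectedVersion: number,
): Promise<boolean> {
  const result = await query(
    "UPDATE articles SET body = $1, version = version + 1 WHERE id = $2 AND version = $3",
    [body, id, expectedVersion],
  );
  return result.rowCount === 1; // false => conflict; reload and retry
}
```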
Well, it is primarily an issue of hardware scaling, but there are a few things to keep in mind with respect to the software involved. For example, if you are on a server farm, you'll need to work with a session-management server (either via SQL Server or via a state server), which has implications in that your session variables must be serializable.
But in the bigger picture, there are a variety of things you would want to do to scale to an enterprise level. For example, it becomes particularly important to abstract your database calls into a DAL (data access layer), because you may well need to adopt a middleware package for high-volume environments.
I've been reading through a couple of questions on here and various articles on MVC, and I can see how it can be applied even to event-intensive GUI applications like a paint app.
Can anyone cite a situation where MVC might be a bad thing and its use ill-advised?
EDIT: I'm specifically talking about GUI applications here!
I tried MVC in my network kernel driver. The patch was rejected.
I think you're looking at it kind of backwards. The point is not to see where you can apply a pattern like MVC, the point is to learn the patterns and recognize when the problem you are trying to solve can naturally be solved by applying the pattern. So if your problem space can be naturally divided into model, view and controller then it is a good candidate for MVC. If you can't easily see which parts of your design fall into the three categories, it may not be the appropriate pattern.
MVC makes sense for web applications.
In web applications, you process some data (on SA: writing questions, adding comments, changing user info), you have state (logged in user), you don't have many different pages, but a lot of different content to fit into those pages. One Question page vs. a million questions.
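To make that division concrete, a toy sketch (all names invented): one view template serves any number of model instances, with a controller wiring them together.

```ts
// Model: the data and nothing else.
type Question = { id: number; title: string; body: string };

// View: turns a model into markup; one template serves a million questions.
const questionView = (q: Question): string =>
  `<h1>${q.title}</h1><article>${q.body}</article>`;

// Controller: handles the request, fetches the model, picks the view.
function showQuestion(id: number, db: Map<number, Question>): string {
  const q = db.get(id);
  return q ? questionView(q) : "<h1>404</h1>";
}
```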
For building a CMS, for example, MVC is useless. You don't have any models or controllers, just pages of text with decorations and menus. The problem is no longer processing data; the problem is serving that text content properly.
A CMS admin area would build on top of MVC just fine, though; it's the user-facing part that wouldn't.
For web services, you'd better use REST, which, I believe, is a distinct paradigm.
A WebDAV application wouldn't benefit greatly from MVC, either.
The caveat on Ruby for web programming is that Rails is better suited to building web applications. I've seen many projects attempt to create a WebDAV server or a content management system (CMS) with Rails and fail miserably. While you can do a CMS in Rails, there are much more efficient technologies for the task, such as Drupal and Django. In fact, I'd say that if you're looking at a Java portal development effort, you should evaluate Drupal and Django for the task instead.
Anything where you want to drop in third-party components will make it tough to work with the MVC pattern. A good example of this is a CMS.
Each component you get will have its own controller objects, and you won't be able to share 'control' of the model-to-UI passing.
I don't necessarily know that MVC is ever really a bad idea for a GUI app. But there are alternatives that are arguably better (and also arguably worse depending on whose opinion you're asking). The most common is MVP. See here for an explanation: Everything You Wanted To Know About MVC and MVP But Were Afraid To Ask.
Although I suppose it might be a bad idea to use MVC if you're using a framework or otherwise interacting with software that wasn't designed with MVC in mind.
In other words, it's a lot like comparing programming languages: there are usually few tasks for which one can say one is clearly better than the other. It usually boils down to programmer preference, availability of libraries, and the team's experience.
MVC shouldn't be used in applications where performance is critical. I don't know if this still applies with the increase in computing power, but one example is a call-center application. If you can save 0.5 seconds per call when entering and updating information, those savings add up over time. To get the last bit of performance out of your app, you should use a desktop app instead of a web app and have it talk directly to the database.
When is it a bad thing? Wherever there is another code structure that would better fit your project.
There are countless projects where MVC wouldn't 'fit', but I don't see how a list of them would be of any benefit.
If MVC fits, use it; if not, use something else.
MVC and ORM are a joke... they are only appropriate when your app is not a database app, or when you want to keep the app database-agnostic. If you're using an RDBMS that supports stored procedures, then that's the only way to go. Stored procs are the preferred approach for experienced application developers. MVC and ORM are only promoted by companies trying to sell products or services related to those technologies (e.g., Microsoft trying to sell VS). Stop wasting your time learning Java and C#; focus instead on what really matters: JavaScript and SQL.