Looking for best guidance for WCF performance testing [closed]

When someone is developing a WCF service that will be consumed by thousands of people, what are the key points to keep in mind to design a web service that delivers the best performance? Please share your best tips for designing a WCF service that performs well. Thanks.
Another question: what tools and techniques are used in the industry to test the performance of a WCF service before hosting it on a production server? Thanks.

As for the design, make sure that the service is (OK, clichés, but still worth mentioning):
Easy to scale
Stateless (per-call; see the sketch after this list)
Uses no locks
Caches data
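As an illustration of those points, here is a minimal sketch of a stateless, per-call WCF service that caches data without explicit locks; the contract, cache key, and expiry policy are illustrative assumptions, not part of the question:

    using System;
    using System.Runtime.Caching;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [ServiceContract]
    public interface IProductService
    {
        [OperationContract]
        Product GetProduct(int id);
    }

    [DataContract]
    public class Product
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    // Per-call instancing: each request gets a fresh service instance,
    // so there is no per-instance state and nothing to lock.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
    public class ProductService : IProductService
    {
        // MemoryCache is thread-safe, so per-call instances share it without locks.
        private static readonly MemoryCache Cache = MemoryCache.Default;

        public Product GetProduct(int id)
        {
            string key = "product:" + id;
            var cached = Cache.Get(key) as Product;
            if (cached != null)
                return cached;

            var product = LoadFromDatabase(id); // hypothetical data access
            Cache.Add(key, product, DateTimeOffset.Now.AddMinutes(5));
            return product;
        }

        private Product LoadFromDatabase(int id)
        {
            return new Product { Id = id, Name = "Sample" };
        }
    }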
Well, in general it should "just" be highly performant, but of course it depends on your use cases, so it is more important that you know how your users will use the system:
You wrote "thousands of people", but you should have exact numbers defined.
Will the service be used only during certain hours? If so, maybe it's possible to precompute/cache any heavily used data beforehand?
What is the required throughput: number of calls per second, average number of users working?
What about peak volumes? Is it used constantly, or is it just users loading data at one time and then nothing?
Where is it going to be hosted: IIS or self-hosted? Can you control it? How is security plugged in? Is security a concern?
Who calls your service? Is SOAP OK? Can you use REST?
So the point is that to get the best performance you need clearly defined goals, like "I want to handle 1000 calls per second, and each call uses around 2 MB of data" :)
As for the tools, the best one is something that resembles your end users. For final testing that could be a bunch of Selenium tests; for perf testing, even a console application spamming your endpoints will work (see the sketch below). A key factor here is separation, so that your services are hosted on a different server than the test client.
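For example, a bare-bones load-test client could look like this; the endpoint URL, call count, and concurrency level are placeholders to adjust for your own goals:

    using System;
    using System.Diagnostics;
    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;

    class LoadTest
    {
        static async Task Main()
        {
            const string url = "http://test-server/ProductService.svc/GetProduct?id=1"; // placeholder
            const int totalCalls = 1000;
            const int concurrency = 50;

            using (var client = new HttpClient())
            {
                var sw = Stopwatch.StartNew();

                // Fire batches of concurrent requests and measure overall throughput.
                for (int done = 0; done < totalCalls; done += concurrency)
                {
                    var batch = Enumerable.Range(0, concurrency)
                                          .Select(_ => client.GetAsync(url));
                    await Task.WhenAll(batch);
                }

                sw.Stop();
                Console.WriteLine("{0} calls in {1:F1}s ({2:F0} calls/sec)",
                    totalCalls, sw.Elapsed.TotalSeconds, totalCalls / sw.Elapsed.TotalSeconds);
            }
        }
    }

Run it from a machine other than the one hosting the service, as noted above.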

Related

Is it good to keep microservices on a single technology or multiple? [closed]

What is the recommended approach to deciding which technology to use for creating microservices?
e.g. all 50 microservices running on the .NET platform, using SQL Server as the DB for each one of them,
OR
mix and match between different technologies,
e.g. 15 Spring-based microservices with MongoDB, 15 .NET with SQL Server, 20 Node.js microservices with Redis.
I know this will again come down to which technologies the developers are familiar with, but all I am looking to know is which approach you would have taken if you had more than 50 microservices.
It really depends on the role of each microservice. If all of them are REST APIs with a pretty similar functionality (but completely different scope), then it would be helpful to use the same tech stack, because:
You can optimize your development workflows
You get more homogeneity across your entire system, which translates into a number of benefits down the road (identifying and fixing issues faster, optimizing resource usage, etc.).
However, if you have some microservices which have different constraints in terms of performance (or consistency, or any other vector), you can use a different tech stack just for that one. The architectural model of microservices allows that - it doesn't matter what's behind a microservice as long as it exposes an API that can be used by other microservices.
TL;DR - if you have strong reasons to use different tech stacks for some microservices, you should do it, but keep in mind that it doesn't come without a cost.

Design strategy for Microservices in .NET [closed]

What would be a good way for .NET microservices to communicate with each other? Would peer-to-peer communication be better (for performance), using NetMQ (a port of ZeroMQ), or would it be better via a bus (NServiceBus or RhinoBus)?
Also, would you break up your data access layer into microservices too?
-Indu
A service-bus-based design allows your application to leverage the decoupling middleware design pattern. You have explicit control over how each microservice communicates, and you can also throttle traffic. However, it really depends on your requirements. Please refer to this tutorial on building and testing microservices in .NET (C#).
We are starting down this same path. Like all hot new methodologies, you must be careful that you are actually achieving the benefits of using a microservices approach.
We have evaluated Azure Service Fabric as one possibility. As a place to host your applications, it seems quite promising. There is also an impressive API if you want your applications to integrate tightly with the environment. This integration could likely answer your questions. The caveat is that the API is still in flux (it's improving) and documentation is scarce. It also feels a bit like vendor lock-in.
To keep things simple, we have started out by letting our microservices be simple stateless applications that communicate via REST. The endpoints are well documented and contain a contract version number as part of the URI (see the sketch below). We intend to introduce more sophisticated ways of interaction later, as the need arises (i.e., performance).
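To illustrate, a versioned REST endpoint of that kind might look like the following; this is a minimal sketch assuming ASP.NET Web API 2, and the controller, route, and payload are hypothetical:

    using System.Web.Http;

    // The contract version ("v1") is part of the URI, so clients pin to a stable
    // contract and a breaking change ships as a new /v2/ route instead of
    // mutating this one.
    [RoutePrefix("api/v1/orders")]
    public class OrdersV1Controller : ApiController
    {
        [HttpGet]
        [Route("{id:int}")]
        public IHttpActionResult GetOrder(int id)
        {
            // Stateless: everything needed to serve the request arrives with it.
            var order = new { Id = id, Status = "Shipped" }; // placeholder payload
            return Ok(order);
        }
    }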
To answer your question about the data access layer: my opinion would be that each microservice should persist state in whatever way is best for that service. The actual storage is private to the microservice, and other services may only use that data through its public API.
We've recently open-sourced our .NET microservices framework, which covers a couple of the patterns needed for microservices. I recommend at least taking a look to understand what is needed when you go with this kind of architecture.
https://github.com/gigya/microdot

Design approach for hosting multiple microservices on the same host [closed]

I'm working on a web application that I decoupled into multiple containerized microservices. I now have around 20 services, but the whole system will definitely need more than 300. Most of the services now, and some in the future, will not need an entire machine, so I'll deploy multiple services on the same host. I'm wondering how others deal with interservice communication. My preferred way was to go with REST-based communication, but...
Isn't it too heavy to have multiple web servers running on the same machine? I'm developing in Ruby, but even a lightweight web server like Puma can consume a good amount of memory.
I started writing a custom communication channel using UNIX sockets. I'd start one web server, and my "router" app would communicate with the services currently running on that host through UNIX sockets. But I don't know if it's worth the effort; on top of that, all services would have to be written and customized to use this kind of communication. I believe it would be hard to use a framework like Ruby on Rails or others, or even different languages, which is the whole appeal of a microservices architecture. I feel like I'm trying to reinvent the wheel.
So, can someone suggest a better approach or vote for one of my current ones?
I appreciate any help,
Thanks,
It looks like you may want to look into Docker Swarm; they're actively working on these use cases. I wouldn't recommend building your own communication channel: stick with HTTP, or maybe use SPDY if you're really concerned about performance. Anything custom you introduce will make adopting those upcoming solutions more difficult. Also keep in mind that you don't need a heavy-duty web server in most cases; you can always introduce a layer above one or more of your services using nginx or HAProxy, for example.

How do you decide when to upgrade servers vs. add more servers? [closed]

I run a small SAAS website with a (hopefully) growing number of customers. Right now, I have three Amazon EC2 instances. A micro instance runs the web frontend (Rails), a small instance runs the API layer (Rails), and a micro instance runs the data layer (Postgres). Please don't judge; this is more than adequate for my needs at this time.
As I add additional customers, I know that eventually I am going to have to a) increase the horsepower of the existing servers and/or b) load-balance the web/API servers and cluster the database servers.
My question is - how do you decide from a cost/benefit standpoint when to upgrade servers (i.e. micro-->small-->medium-->large) vs. adding additional servers of the same type? I understand that there are benefits to load balancing (such as keeping you online in the event of a server crash or an issue with an availability zone).
Obviously, Amazon charges more for servers with more memory and processing power, but they also charge for ELB and services like that. If I were to increase to two servers in each layer today, that would double my costs + the costs for ELB (not including data transfer costs). It seems like a bottleneck this early on would be best suited by upgrading to medium or better servers.
What are some good rules of thumb for when to build up as opposed to out? Please keep in mind that my choice of software (cough Rails) is very memory intensive when processing large amounts of data.
You mentioned some pros of scaling out (redundancy), but forgot the cons: specifically, more complex deployments and increased overhead (more operating system resources being used).
It isn't just up vs. out; it's up vs. out for each layer. The DB tier generally wants to scale up, since that avoids clustering/replication headaches. The application tier can go either way. Web servers scale out nicely, since the requests they handle are independent of one another.
Specific to Amazon and their pricing right now (https://aws.amazon.com/ec2/pricing/), scaling up vs. out costs about the same, with scaling up (to about a large instance) slightly ahead.
Scaling up has the benefit of faster CPU + more RAM, so your app's performance may increase as a result. Yes, there's a time when out vs. up will win, but from my perspective, we have chosen to scale up whenever possible (we're on AWS as well), as we notice a performance boost to our app each time we do, in addition to allowing for additional capacity as our user base grows.

How Much Traffic Can Shared Web Hosting Take? [closed]

I have a cheap shared hosting plan with Reliablesite.net ($5/month).
I've been building a small site that I want to start promoting in a few weeks, and I was going to road-test it by hosting it on the shared plan I already have.
My issue is that I don't know at what point I should move onto clustered hosting / dedicated hosting.
Questions
What pageviews/day can a shared hosting plan be expected to handle?
What can standard shared database servers take without choking up, or without me getting rude emails from my hosting provider?
In my experience, a shared hosting environment like Reliablesite.net can take around 10,000-20,000 unique users per day, or 100,000-200,000 pageviews/day. That number can vary depending on your site. For optimization, it is important to reduce the number of DB queries (I keep it to at most 6-7 per page render) and to be careful when programming. Using ASP.NET MVC gave a nice perf improvement for me, but a well-written WebForms app can perform well too. If you are using some other tech stack, like PHP/MySQL, I don't know the numbers.
When you exceed those numbers, you will have enough money from Google AdSense to go with a VPS or dedicated plan.
Just to add something regarding page-render/DB-query performance: using a stored procedure or query that returns multiple result sets is a great way to reduce the number of DB requests (see the sketch below)!
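For reference, reading multiple result sets from a single command in ADO.NET looks roughly like this; the connection string and queries are placeholders:

    using System;
    using System.Data.SqlClient;

    class MultipleResultSets
    {
        static void Main()
        {
            const string connStr = "Server=.;Database=Shop;Integrated Security=true"; // placeholder
            // One round trip returns two result sets instead of two separate queries.
            const string sql = "SELECT Id, Name FROM Products; SELECT Id, Total FROM Orders;";

            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("Product {0}: {1}", reader.GetInt32(0), reader.GetString(1));

                    reader.NextResult(); // advance to the second result set
                    while (reader.Read())
                        Console.WriteLine("Order {0}: {1}", reader.GetInt32(0), reader.GetDecimal(1));
                }
            }
        }
    }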
Traffic usually is not a problem on shared hosting. The only problems you may encounter are RAM and CPU restrictions. But if your application is written correctly, it can operate well within these limitations.
Hints:
use a memory profiler to debug and optimize your web application
use a CDN for storing media files
If you need some numbers: a properly written web application that uses a CDN for media files can handle at least 10k unique visitors per day on shared hosting.
It would be best to ask your provider these questions; every provider is going to be different.
Generally what happens is that the provider can handle the requests, but they'll simply shut down your site once it reaches a certain threshold.
It also depends on the amount of bandwidth you have opted for, and how much traffic you are expecting. My blog is on shared hosting, and 4k visits in a day was once my maximum, and I didn't feel any difference in performance. Don't worry unless your site appears on the front page of Digg or some high-traffic websites link to your site.
I have been using MySQL on shared hosting for a while, mainly on informational websites that have gotten at most 300 visits per day. What I have found is that the hosting was barely sufficient to support more than 3 or 4 people on the website at one time without it almost crashing.
Theoretically, I think shared hosting with most services could efficiently support about 60 users per hour max, if your users all came one or two at a time. That would work out to about 1,500 users in one day. This is highly unlikely, however, because a lot of users tend to be online at certain times of the day, and you also have to factor in that shared servers often get sloppy due to abuse from others on the server.
I have heard from reliable sources that some VPS hosting at 40-50 dollars per month has supported 500,000 hits per month. I'm not sure what those websites' configurations were, though; I doubt the sites ran many dynamic DB queries, or possibly they were simply static.
One other thing that is common on shared hosting is splitting the file hosting from the database hosting. Sometimes your files will appear online just fine, but the database that runs your actual website will lag badly due to abuse from your neighbors.
I suggest ensuring that your application is ready for large amounts of traffic: even if you are on a super-duper web server, if your app is badly written you will lose potential clients. Some of the easiest optimizations that can be done to an existing web app are to reduce the number of DB connections, so read up on caching and partial caching (a sketch follows below).
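As a concrete example, in ASP.NET MVC (mentioned earlier in this thread) the simplest form of page caching is the OutputCache attribute; this is a minimal sketch, and the controller and data access are hypothetical:

    using System.Web.Mvc;

    public class ArticlesController : Controller
    {
        // Cache the rendered page for 60 seconds, one variant per article id,
        // so repeat visitors skip the DB work behind this action entirely.
        [OutputCache(Duration = 60, VaryByParam = "id")]
        public ActionResult Details(int id)
        {
            var article = LoadArticle(id); // hypothetical data access
            return View(article);
        }

        private object LoadArticle(int id)
        {
            return new { Id = id };
        }
    }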
