Why is the response time so different when calling the same action/page at different times of day? [closed] - performance

Why is the response time so different when calling the same action/page at different times of day? I'm working on an internal server where I'm the only one using the application (it doesn't require an internet connection).
I'm not connected to a network, and there is only one user running the app (me). It's an ASP site with a remote database.

Once again: where do you even start? You'll seriously need to look at all aspects of the server the application is on.
If you have a connected database then you'll need to look at whether:
the database is on a remote server - network issues can interfere quite heavily with your timings here.
the same server - if this is an instanced database you will need to take into account the performance impact of the service that is managing your database and all of the related aspects of that (e.g. do you have any kind of agents running background tasks for the database?).
Are you running a standalone database like MS Access? - this may cause the least disruption in some ways but can be disastrous in others.
What type of web-application are you looking at?
A simple scripted non-managed IIS ASP site - Very little to manage via IIS here; no need to section off a pool for the application.
A full blown IIS managed application - IIS managed, passing of cookies, credentials etc (all takes slices of time).
If you are connected to a network, then...
How many users are on the network - While most machines on the network will have a negligible impact on your application server or PC, some definitely do not, such as DNS servers and the like; they need to gather network information for the successful management and running of the network as a whole. Your application server will also communicate with other servers to say things like: "Hi! I'm over here!".
Perhaps the most important question should be regarding your server(s):
What services are running - every service that runs on your server swallows time slices.
What services are not running on your server? - to keep your timings realistic, should you stop any services or (more importantly) should you not?
What services are running on your database server? - just as important as your main application server, your database server needs time to furnish data to your application. If there are other services running on here then this can impact heavily on your time.
Please everyone, chip in here - there's just so much to take into account.
Without an adequate description of your setup and task, it's very difficult for anyone to give a wholly valid answer.
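If it helps, before hunting through services you can quantify the variance first. A minimal sketch (Python standard library only; the URL is a placeholder for your page) that samples the same page and prints a timestamped summary, so you can run it at different times of day and correlate slow periods with whatever is scheduled on the servers:

```python
import time
import urllib.request
from datetime import datetime
from statistics import mean, stdev

URL = "http://my-internal-server/my-page.asp"  # placeholder: the page under test
SAMPLES = 10          # requests per measurement run
INTERVAL_SECONDS = 2  # pause between requests

def measure_once(url: str) -> float:
    """Return the wall-clock seconds taken to fetch the page once."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as response:
        response.read()  # drain the body so transfer time is included
    return time.perf_counter() - start

def measurement_run() -> None:
    timings = [measure_once(URL) for _ in range(SAMPLES)]
    # Timestamped summary line; run this at different times of day and compare.
    print(f"{datetime.now():%Y-%m-%d %H:%M:%S} "
          f"mean={mean(timings):.3f}s stdev={stdev(timings):.3f}s "
          f"min={min(timings):.3f}s max={max(timings):.3f}s")
    time.sleep(INTERVAL_SECONDS)

if __name__ == "__main__":
    measurement_run()
```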

Related

Azure Web Application Gateway performance with load test [closed]

I have a Visual Studio load test that runs through the pages on a website, but I have experienced big differences in performance when using a load balancer. If I run the tests going straight to Web Server 1, bypassing the load balancer, I get an average page load time of under 1 second for 100 users, as an example. If I direct the same test at the load balancer with 2 web servers behind it, then I get an average page load time of about 30 seconds - it starts quick but then deteriorates. This is strange, as I now have 2 load-balanced web servers instead of 1 direct, so I expected to be able to increase the load. I am testing this with Azure Web Application Gateway now, on Azure VMs. I experienced the same problem previously with an nginx setup; I thought it was due to that setup, but now I find I have the same on Azure. Any thoughts would be great.
I had to completely disable the firewall to get consistent performance. I also ran into other issues with the firewall, where it gave us max entity size errors from a security module; after discussing with Azure Support, this entity size cannot be configured, so keeping the firewall would mean some large pages would no longer function and would get this error. This happened even with all rules disabled; I spent a lot of time experimenting with different rules on/off. The SQL injection rules didn't seem to like our ASP.NET Web Forms site. I have now simulated 1,000 concurrent users split between two test agents, and the performance was good for our site, with average page load time well under a second.
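For anyone wanting to reproduce the direct-versus-gateway comparison without Visual Studio, a rough sketch along these lines can do it (Python standard library; both URLs are placeholders for your Web Server 1 and gateway endpoints). It fires the same concurrent load at each endpoint and reports the mean page load time:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

# Placeholders: substitute your direct web server and gateway endpoints.
DIRECT_URL = "http://webserver1.example.com/"
GATEWAY_URL = "https://my-app-gateway.example.com/"
CONCURRENT_USERS = 100
REQUESTS_PER_USER = 5

def timed_fetch(url: str) -> float:
    """Fetch the page once and return the elapsed seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=60) as response:
        response.read()
    return time.perf_counter() - start

def load_test(url: str) -> float:
    """Run the same concurrent load against one endpoint; return mean latency."""
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        futures = [pool.submit(timed_fetch, url)
                   for _ in range(CONCURRENT_USERS * REQUESTS_PER_USER)]
        return mean(f.result() for f in futures)

if __name__ == "__main__":
    print(f"direct : {load_test(DIRECT_URL):.2f}s mean page load")
    print(f"gateway: {load_test(GATEWAY_URL):.2f}s mean page load")
```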
Here is a list of things that helped me improve the same situation:
Add a non-SSL listener and use that (e.g. HTTP instead of HTTPS). Obviously this is not the advised solution, but maybe it can give you a hint (offload SSL to the backend pool servers? Add more gateway instances?)
Disable WAF rules (slight improvement)
Disable WAF + add more gateway instances (increased from 2 to 4 in my case) - this solved the problem!

Migrate from monolith to microservice architecture [closed]

We are in the initial stages of designing a microservice architecture for my client, starting from their standard monolith app that is sitting on 4 JBoss servers in their own data center. Is microservice architecture targeted only at cloud-based deployment? Can I deploy a microservice on an on-premise, production-ready Tomcat/JBoss? Is that a good fit?
Sure you can.
Microservice architecture is the concept of having many small interacting components, each performing a well-defined part of the work, and performing it well.
It's an extension of the Linux way and the concept of decoupling components.
In your case you can split your service into several smaller services, each with its own development and deployment cycle and its own well-defined API.
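As an illustration of what "a small service with its own well-defined API" can mean in practice, here is a minimal sketch using only Python's standard library (the endpoint, port, and data are invented for the example); each extracted service can be as small as one process answering one kind of request:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Invented example data: a tiny "catalog" service split out of a monolith.
BOOKS = {"1": {"title": "Example Book", "author": "A. Author"}}

class CatalogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Well-defined API: GET /books/<id> returns JSON, anything else is 404.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "books" and parts[1] in BOOKS:
            body = json.dumps(BOOKS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Each service gets its own port, deploy cycle, and data.
    HTTPServer(("0.0.0.0", 8081), CatalogHandler).serve_forever()
```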
Is microservice architecture targeted only at cloud-based deployment?
No, it is an architecture for application development in general. The basic idea of microservices is to separate complex application functionality into small functions, reducing complexity and allowing high performance.
There are a few things you need to consider before moving to microservices:
1. Scale of your application
If your application contains a high number of complex functions, it's better to go with microservices: separate the functions out and deploy them separately, which makes changes and maintenance easy.
2. Performance of the application
If some application functions need high computing power, you can allocate separate hardware resources to them if you implement them as microservices.
3. Deployment and maintenance
With microservices you can deploy and maintain each service separately without affecting the other services.
4. Data migration
If your database contains many table relations, it will be a little difficult to split it into per-function databases (each microservice needs its own DB). As a first step, keep the DB monolithic and separate the functions into services; then start to refactor the DB.
5. Calling each service
Keep the front-end application clean and logic-free; wrap your microservices with an API gateway and publish all the services as one service.
6. Application security
Since each and every service runs separately, there is no need for session tracking; use JWT (OAuth2) API security (a token-check sketch appears at the end of this answer).
7. Multiple services & transactions
If you need to handle one business function with more than one service, you need to check that each service's part of the work completed correctly (e.g. DB operations, rollbacks), so you need to develop a transaction handler; see the compensation sketch below.
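A very rough sketch of the kind of transaction handler point 7 describes (a minimal compensation pattern in Python; the service calls are invented stand-ins, not a real framework): perform each step, remember how to undo it, and undo the completed steps in reverse order if a later one fails.

```python
class Saga:
    """Minimal compensation-based transaction across several services."""

    def __init__(self):
        self._compensations = []

    def step(self, action, compensation):
        """Run one service call; remember how to undo it if needed later."""
        action()
        self._compensations.append(compensation)

    def run(self, steps):
        try:
            for action, compensation in steps:
                self.step(action, compensation)
        except Exception:
            # A later step failed: undo completed steps in reverse order.
            for compensation in reversed(self._compensations):
                compensation()
            raise

# Invented stand-ins for calls to two separate services:
def reserve_stock(): print("stock reserved")
def release_stock(): print("stock released")
def charge_card(): raise RuntimeError("payment service is down")
def refund_card(): print("card refunded")

if __name__ == "__main__":
    try:
        Saga().run([(reserve_stock, release_stock), (charge_card, refund_card)])
    except RuntimeError as e:
        print(f"business function rolled back: {e}")
```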
Implementing microservices
There is no specific technology stack for it, but there are plenty of free technologies available, e.g.:
Java Spring Boot for microservices (with an inbuilt Tomcat server)
Zuul and Eureka for the API gateway
OAuth2 and JWT for security
Note: there is no fixed way to implement microservices; use the right technology stack to get performance and implement small business functions. It doesn't matter whether you host on cloud or local servers.
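And for point 6, here is a minimal sketch of the stateless token checking an API gateway would do, assuming the third-party PyJWT library (the secret and claims are placeholders); the point is that any gateway instance can verify a request without shared session state:

```python
import time
import jwt  # third-party PyJWT library: pip install PyJWT

SECRET = "change-me"  # placeholder: shared signing key

def issue_token(user_id: str) -> str:
    """Auth service: sign a short-lived token instead of creating a session."""
    claims = {"sub": user_id, "exp": int(time.time()) + 3600}
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_token(token: str) -> str | None:
    """Gateway: verify statelessly on any instance; no session tracking."""
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
        return claims["sub"]
    except jwt.InvalidTokenError:
        return None  # reject the request with 401

if __name__ == "__main__":
    token = issue_token("user-42")
    print(verify_token(token))        # -> user-42
    print(verify_token(token + "x"))  # -> None (tampered token rejected)
```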
There are definitely no limitations on whether you deploy your microservices on local, physical servers or in the cloud. Both approaches are valid, but they come with different advantages and disadvantages.
With local/physical servers, you will have:
bigger operations overhead (it is better you have good DevOps in your team)
manual scaling (when you experience bigger traffic, you need to manually fire up new instances, or use some management tool for this)
manual fault detection - if a server goes down (this depends on your/your company's server environment), someone will need to fix it "manually"
it is cheaper (a friend buys old server instances on Amazon and runs their semi-microservice architecture on them; he calculated they achieve quite big savings this way)
With cloud infrastructure, you get some of the advantages below (in contrast to the disadvantages above):
less operations overhead (the cloud will take care of most of operations)
flexible scaling (when your traffic goes up, cloud can automatically fire up new instances, when it goes down, it will shutdown instances)
error/fault handling - if a problem occurs in the cloud, you do not need to worry about fixing it yourself
I did not mention all the advantages and disadvantages of given approaches, as it also depends on the project (will it receive different traffic on different times of day, does it need to keep data locally or can it be in a foreign country in a cloud, ...).

web service client in GAE production is too slow [closed]

I have a Java based web application that is hosted in google app engine. There is a simple web service call to the Amazon Product and Advertising API to look up for books when the user inputs a title. Everything runs fine on my local development environment. However, the web service call is annoyingly slow on production.
E.g. When I invoke the web service call in my dev environment, it takes about 3-4 seconds to get the response back. In production, the same call to the same API would take 15-16 seconds. There is no datastore activity involved at this moment, just a web service call and display the results.
I am pretty sure that this is not the initial load issues others are talking about regarding GAE in production. It has been consistently slow no matter if the load is warmed up. I have tried to search everywhere but nobody seems to be complaining about the same issue. Does anyone have any clue what this might be? Is there any good tool to tackle this kind of performance issue? Thank you!
Here is my update as of 01/23/2012:
I have identified the bottleneck - it takes about 10 seconds to get the port from the Amazon service (I was using a SOAP-based web service client). My solution was to use a RESTful client instead, and the performance is greatly improved: now it takes only 1 second to get the information back from Amazon.
The speed of response of Amazon APIs has nothing to do with the performance of GAE.
It's more likely that Amazon throttles access to their APIs per IP. Since GAE is a shared service with a set of common IPs, it might be that other apps on GAE calling Amazon are contributing to the delay. If this continues to be a problem, then you might want to set up a proxy server somewhere (Amazon EC2?).
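If you do go the proxy route, a minimal sketch of pointing your outbound calls at your own intermediary, assuming the third-party requests library (the proxy host and target endpoint are placeholders; whether your GAE runtime permits arbitrary outbound proxying depends on the environment):

```python
import requests  # third-party: pip install requests

# Placeholder: a proxy you control, e.g. a small EC2 instance running Squid.
PROXIES = {
    "http": "http://my-proxy.example.com:3128",
    "https": "http://my-proxy.example.com:3128",
}

def lookup_books(title: str) -> str:
    """Route the REST call through your own proxy so Amazon sees its IP
    rather than GAE's shared outbound addresses."""
    # Placeholder endpoint standing in for the real book-lookup API call.
    response = requests.get(
        "https://api.example.com/books",
        params={"title": title},
        proxies=PROXIES,
        timeout=10,
    )
    response.raise_for_status()
    return response.text
```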

Will it ever be possible for developers to not have to worry about server configuration? Should we have to worry about this? [closed]

I'm currently looking at hosting solutions for my Ruby on Rails SaaS web application, and the biggest issue I see is that if I go with something like Amazon EC2, then I still need to configure my own server and install what I need (e.g. database, programming framework, application server, etc.). Each one of these is an opportunity for something to go wrong. I also have to worry about how my data is getting backed up, how frequently, and a host of other "low-level" details. Being a startup I don't have the resources for a sysadmin, so I would have to play one myself. I currently do some work for a startup and my boss is always talking about how great EC2 is because it lets us "get out of the hardware business" - in reality though, it doesn't feel that way, because we still have to set up the server instances, still have to install software, still have to configure the software properly. It feels like we're still in the hardware business, just that we don't really own the server we're using.
In contrast is a service like Heroku (which actually uses EC2 underneath, I believe) but basically takes care of all the low-level details. They do automatic backups for me, I just specify the frequency. They have a server configuration already set up. They have ways to manage it and keep it running so I don't have to monitor traffic. I can focus on my application and just deploy the code, and let them worry about administration and making sure the database is properly configured with the web server and the right folders have permissions.
The problem with Heroku is obviously that I don't have control over these things if I wanted to modify them. Heroku uses nginx as its web server; if I want to use Phusion Passenger on Apache to stay on the "cutting edge" of RoR development, I'm SOL. If I need to make a quick patch in production (root of all evil, I know, but it happens sometimes), I don't have SSH access to Heroku's servers. If I need to set up a new database user to allow somebody else to remotely access data, I don't think I can do this. And worst of all, if something does happen with the server, I have no way of doing anything except wait for Heroku to fix it.
Basically at what point, if ever, can we as developers focus on our code and application and not have to play sysadmin with server configuration? As a startup with limited resources and limited knowledge of configuring servers (enough to get by), would I be better off sacrificing some configurability for the ability to let somebody else worry about the hardware/software end of things?
Make the server config part of your project and use scripts to set up and tear down your servers. Keep everything under VCS and use the scripts routinely to recreate your development setup.
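A minimal sketch of that idea, with Python driving the shell (the package list and commands are illustrative, assuming a Debian-style host): the point is that the script lives in version control next to the application, so a fresh server is one command away and your setup is reproducible rather than hand-crafted.

```python
import subprocess

# Illustrative provisioning steps for a Rails stack on a Debian-style host.
SETUP_STEPS = [
    ["apt-get", "update"],
    ["apt-get", "install", "-y", "nginx", "postgresql", "ruby-full"],
    ["gem", "install", "bundler"],
]

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)  # stop the script on the first failure

def provision() -> None:
    for step in SETUP_STEPS:
        run(step)

if __name__ == "__main__":
    provision()
```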
https://stackoverflow.com/questions/162144/what-is-a-good-ruby-on-rails-hosting-service/265646#265646
I'm not interested in learning how to configure Apache, ModRails, Phusion, Mongrel, Thin, MySQL, and whatever. With Heroku I don't worry. nginx is the web server, and PostgreSQL is the database. They have settled on Ruby/Rack for all new apps. Frameworks that run on Rack include Rails, Merb, and Sinatra. Limited choices.

Windows Licensing Question [closed]

This is slightly off topic of programming but still has to do with my programming project. I'm writing an app that uses a custom proxy server. I would like to write the server in C# since it would be easier to write and maintain, but I am concerned about the licensing cost of Windows Server + CALS vs a Linux server (obviously, no CALS). There could potentially be many client sites with their own server and 200-500 users at each site.
The proxy will work similarly to a content filter: take returning web pages, process them based on the content, and either return the web page or redirect to a page on another web server. There will not be any use of SQL Server, user authentication, etc.
Will I need CALs for this? If so, about how much would it cost to set up a Windows Server with proper licensing (per server, in the USA)?
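Tangential to the licensing question, but worth noting that the filtering decision itself is OS-agnostic. A minimal sketch of the return-or-redirect logic described above (in Python rather than C#, with an invented blocklist and redirect target):

```python
# Invented example values; a real filter would load these from configuration.
BLOCKED_TERMS = {"casino", "malware"}
REDIRECT_URL = "http://filter.example.com/blocked"

def filter_response(page_html: str) -> tuple[int, str]:
    """Decide what the proxy returns for a fetched page:
    (200, page) to pass it through, or (302, redirect URL) to block it."""
    lowered = page_html.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return 302, REDIRECT_URL
    return 200, page_html

if __name__ == "__main__":
    print(filter_response("<html>weather report</html>"))    # passes through
    print(filter_response("<html>best casino deals</html>")) # redirected
```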
This really is an off-topic question. In any case, there is nothing easier than contacting your local MS distributor. As Stack Overflow is by nature an international site, a question like this, where the answer is most likely to vary by location (MS license prices really are highly variable and country-specific), is in my opinion not likely to receive a useful answer.
I realize this isn't exactly answering your question but if you want to use Linux, maybe you want to look into using Mono. .Net on Linux.
If users will not be actually connecting to any MS server apps (such as Exchange, SQL Server, etc) and won't be using any OS features directly (i.e. connecting to UNC paths) then all that should be required is the server license for the machine to run the OS. You need Windows Server CALs when clients connect to shares, Exchange CALs for mail clients, and SQL Server CALs for apps that connect to your databases. If the clients of your server won't be connecting to anything but the ports offered by your service, you should be in the clear, and it shouldn't cost any more to build a server for 100 users than 10.
You may not need any CALs for users depending on how you use the server. Certain functionality requires the purchase of CALs but some doesn't. There's no real good way to answer this question since the requirements are too vague. Does it use domain services? Does it use SQL server? Clustering? There are many variables.
If you are looking at what the most you could possibly pay, go to CDW and look at the Open License/Open Business products to get an estimate.
As said above, if you are using your own connections and nothing else on the server, you won't need the CALs.
I would Google the ROI on Linux vs Windows for a commercial server. I have no strong opinion generally on this, but I have seen that long term they level out; in the grand scheme of things, the initial cost of the Windows license is actually minimal and insignificant.
Choose the best technology to solve the end users problem, document why, provide an evaluation report, include maintenance costs, development costs etc. When you do this the answer will be clear to you and your customer.
If your users are not connecting to any other Windows resources (Active Directory, SQL Server, file shares, etc) then you shouldn't need CALs, but I believe there is something like an external connector license. There's also a 'web edition' which looks like it's in the range of ~$400.
Also, it looks like Microsoft will be removing the CAL restrictions on web servers completely in Windows Server 2008.
Microsoft should call their licensing division Enigma...
