Website Performance Issue [closed]

If a website is experiencing performance issues all of a sudden, what can be the reasons behind it?
In my view, the database could be one reason, or disk space on the server could be another; I would like to know more about the possibilities.

There can be any number of reasons, and which ones apply depends on your setup.
Based on what you have described, you can have a look at:
System counters of the web server/app server, such as CPU, memory, paging, I/O, and disk (a quick way to snapshot these is sketched after this answer).
Any changes you made to the application. If there were any, were those changes costly in performance terms? Do a round of analysis on them to check whether any improvement is required.
If the system counters are choking, check which resource is the bottleneck and try to resolve it.
Check all layers/tiers of the application, i.e. app server, database, directory service, etc.
If the database is the bottleneck, identify the costly queries and apply indexes and other DB tuning.
If the app server is choking, you need to identify and improve the methods that are resource-heavy.
Performance tuning is not a fast-track process; it takes time. Identify a bottleneck, try to solve it, and repeat the process until you get the desired performance.
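If you want a quick first look at those counters without setting up full monitoring, here is a minimal sketch using the third-party psutil package (an assumption on my part; any monitoring tool would do):

```python
# Minimal sketch: one-off snapshot of the system counters listed above.
# Assumes the third-party psutil package (pip install psutil).
import psutil

print("cpu %:      ", psutil.cpu_percent(interval=1))
print("memory %:   ", psutil.virtual_memory().percent)
print("swap %:     ", psutil.swap_memory().percent)  # rough proxy for paging
print("disk used %:", psutil.disk_usage("/").percent)

io = psutil.disk_io_counters()
print("disk io:    ", io.read_bytes, "bytes read /", io.write_bytes, "written")
```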

Related

Windows Server- Is there a way to get CPU Usage over a period by a specific program? [closed]

I need to find which processes are consuming the most CPU, and over what period of time. Is this possible using any counter or script?
This at least gets you the info on who's using up the CPU. As to when, well, that's another question entirely.
I think you should configure a data collector set in Performance Monitor (PerfMon). You can collect the counter "\Process(*)\% Processor Time". You can roll over the collector files for later analysis and hence see process performance over time.
When you look at the files later, the graphs should make it easier to find the process that is consuming the most CPU. I can't bang out a full tutorial at the moment, but a simple Google search should turn up plenty of instructional info.
I will say the biggest challenge is configuring the schedule just right to make sure you're capturing all the data you need. If that starts getting confusing, there's a folder buried in Task Scheduler called PLA, for Performance Logs & Alerts. You should find a job there that correlates to your collector. It may be easier to work on the schedule there...
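If a quick script is acceptable instead of a full collector set, here is a rough sketch (assuming the third-party psutil package; the sampling interval and count are arbitrary) that logs the top CPU consumers over time:

```python
# Rough sketch: sample per-process CPU usage over time and print the top
# consumers. Processes started after the script begins are not tracked.
import time
import psutil

procs = list(psutil.process_iter(["name"]))
for p in procs:
    try:
        p.cpu_percent(None)  # prime the per-process counters
    except psutil.NoSuchProcess:
        pass

for _ in range(10):          # ten samples, five seconds apart
    time.sleep(5)
    usage = []
    for p in procs:
        try:
            usage.append((p.cpu_percent(None), p.info["name"], p.pid))
        except psutil.NoSuchProcess:
            continue
    print(time.strftime("%H:%M:%S"), sorted(usage, reverse=True)[:5])
```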
Thanks.

Calling antiviruses from software to scan in-memory images [closed]

Our system periodically (several times a minute) calls an external service to download images. As security is a priority, we are performing various validations and checks on our proxy service, which interfaces with the external service.
One control we are looking into is anti-malware scanning, which should scan each incoming image and discard it if it contains malware. The problem is that our software does not persist the images (where they could be scanned the usual way); instead, due to the large volume of images, it holds them in an in-memory (RAM) cache for a period of time.
Do modern antiviruses offer APIs that can be called by the software to scan a particular in-memory object? Does Windows offer a unified way to call this API across different antivirus vendors?
On a side note, does anybody have a notion of how this might affect performance?
You should contact the antivirus vendors. Some of them do offer such APIs, but you will probably find it tricky even to find out the pricing.
Windows has AMSI, which has a stream interface and a buffer interface. I don't know whether it copies the data out of the buffer or scans the buffer in place.
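For what it's worth, here is a rough ctypes sketch of the AMSI buffer interface (Windows only; the function names are the documented AMSI exports, but treat the error handling and the app name as illustrative):

```python
# Rough sketch: scanning an in-memory buffer with AMSI via ctypes.
# Assumes Windows 10+ with a registered AMSI provider (e.g. Defender).
import ctypes

amsi = ctypes.WinDLL("amsi")
AMSI_RESULT_DETECTED = 32768  # results >= this value mean "malware"

def scan_buffer(data: bytes, content_name: str) -> bool:
    """Return True if the installed AV flags the buffer as malicious."""
    context = ctypes.c_void_p()
    session = ctypes.c_void_p()
    result = ctypes.c_int(0)

    if amsi.AmsiInitialize("ImageProxy", ctypes.byref(context)) != 0:
        raise OSError("AmsiInitialize failed")
    try:
        if amsi.AmsiOpenSession(context, ctypes.byref(session)) != 0:
            raise OSError("AmsiOpenSession failed")
        if amsi.AmsiScanBuffer(context, data, len(data), content_name,
                               session, ctypes.byref(result)) != 0:
            raise OSError("AmsiScanBuffer failed")
        amsi.AmsiCloseSession(context, session)
        return result.value >= AMSI_RESULT_DETECTED
    finally:
        amsi.AmsiUninitialize(context)

# Hypothetical usage: scan an image held in RAM before caching it.
# if scan_buffer(image_bytes, "downloaded-image"): discard it
```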
And it will absolutely wreck your performance, probably.
What might be faster is to have some code that verifies the payloads really are images that can be decoded and re-encoded. There are obvious problems with re-encoding .jpg images (generation loss), so maybe just sanity-check the header and data. This could also be slower: decoding large images is slow. But it would probably catch zero-day exploits targeting libpng/libjpeg better.
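As a sketch of that decode-and-re-encode idea (assuming the Pillow package; the output format choice is illustrative):

```python
# Sketch: reject anything that is not a well-formed, decodable image by
# fully decoding it and re-encoding to a known-good format.
import io
from PIL import Image

def reencode_image(data: bytes) -> bytes:
    img = Image.open(io.BytesIO(data))
    img.load()                    # force a full decode; raises on bad data
    out = io.BytesIO()
    img.save(out, format="PNG")   # PNG avoids JPEG generation loss,
                                  # at the cost of larger output
    return out.getvalue()
```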
You can also read horror stories about scanning servers like that being attacked through malware embedded in otherwise benign files, though the last one I remember is from last decade.

Measuring performance and scalability of MPI programs [closed]

I want to measure the scalability and performance of an MPI program I wrote. So far I have used the MPI_Barrier function and a stopwatch library to measure the time. The problem is that the computation time depends a lot on the current CPU and RAM load, so I get different results every run. Moreover, my program runs in a VMware virtual machine, which I need in order to use Unix.
I wanted to ask: how can I get an objective measure of the times? I want to see whether or not my program scales well.
In general, the way most people measure time in their MPI programs is with MPI_Wtime, since it is the portable way to get wall-clock time. That will give you a decent real-time result.
If you are looking to measure CPU time instead of real time, that is a very different and much more difficult problem. The way most people handle it is to run their benchmarks on an otherwise quiet system.
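A minimal sketch of that pattern, using mpi4py here (an assumption; in C you would call MPI_Wtime directly):

```python
# Minimal sketch: wall-clock timing of an MPI section with barriers so
# every rank measures the same region. Run with e.g.:
#   mpiexec -n 4 python bench.py
from mpi4py import MPI

comm = MPI.COMM_WORLD

comm.Barrier()                 # line all ranks up at the start
t0 = MPI.Wtime()               # portable wall-clock time

result = sum(i * i for i in range(1_000_000))  # placeholder workload

comm.Barrier()                 # wait for the slowest rank
elapsed = MPI.Wtime() - t0

if comm.rank == 0:
    print(f"{comm.size} ranks: {elapsed:.6f} s")
```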

Computer architecture: cache pollution [closed]

What I read on Wikipedia is that cache pollution occurs when we access some data once and do not use it again, so precious cache space is occupied by that data and some useful data is evicted to make room for it.
Is my understanding correct, or am I missing something? Can I get more information on cache pollution?
Thanks.
Most cache memories use the least recently used (LRU) replacement algorithm, i.e. they replace the data in the cache that has gone unused the longest. So if you fill the whole cache with new data, the data loaded earliest will be replaced first, even if it will be used again and the data loaded later will not.
It therefore makes sense to keep this behaviour of the cache in mind when developing data-intensive algorithms.
I don't know which Wikipedia article you have read, but here is a good example.
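To make the eviction effect concrete, here is a tiny toy simulation of a fully associative LRU cache (illustrative only; real hardware caches are set-associative):

```python
# Toy LRU cache: a one-shot streaming scan evicts the reused "hot" data,
# which is exactly what cache pollution means.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()

    def access(self, addr):
        if addr in self.lines:
            self.lines.move_to_end(addr)    # now most recently used
            return True                     # hit
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)  # evict least recently used
        self.lines[addr] = True
        return False                        # miss

cache = LRUCache(capacity=4)
hot = ["a", "b", "c", "d"]
for addr in hot:                      # warm the cache with reused data
    cache.access(addr)
for addr in ["s1", "s2", "s3", "s4"]:
    cache.access(addr)                # streaming data used exactly once
print([cache.access(a) for a in hot])  # [False, False, False, False]
```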

Refactor old webapp to gain speed [closed]

Four years ago I built a webapp which is still used by some friends. The problem with that app is that it now has a huge database and it loads very slowly. I know that is my own fault: MySQL queries are mixed in all over the place (even in the layout-generation code).
At the moment I know some OO. I would like to use this knowledge in my old app, but I don't know how to do it without rewriting everything from the beginning. Moving the app to MVC would be very difficult at this point.
If you were in my place, or if you had the task of improving the speed of my old app, how would you do it? Do you have any tips for me? Any working scenarios?
It all depends on context. The ideal would be to rework the entire application, introducing best practices and standards at once, but it is probably better to adopt an evolutionary approach:
1. Identify the major bottlenecks in the application using a profiling tool or a load test (a minimal profiling sketch follows this answer).
2. Estimate the effort required to refactor each item.
3. Identify the pages whose performance matters most to the end user.
4. Based on that information, create a task list and set the priority of each item.
Attack one problem at a time, making small increments, and always try to spend 80% of your time on the 20% of problems that are most critical.
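For step 1, the standard-library profiler is enough to get started. A minimal sketch (the page function is a placeholder for a real code path):

```python
# Minimal sketch: profile one slow code path and print the five most
# expensive calls by cumulative time.
import cProfile
import pstats

def render_page():                 # placeholder for a real request handler
    return sum(i * i for i in range(500_000))

cProfile.run("render_page()", "page.prof")
pstats.Stats("page.prof").sort_stats("cumulative").print_stats(5)
```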
Hard to give specific advice without a specific question, but here are some general optimization/organization techniques:
Profile to find hot spots in your code.
You mention MySQL queries being slow to load; try to optimize them.
Possibly move database access into stored procedures to help modularize your code.
Look for repeated code and try to move it into objects one piece at a time (a sketch of this follows below).
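As a sketch of that last point (Python and SQLite here purely to keep the example self-contained; the same shape works in a PHP/MySQL app), pull each repeated query into one named method so it can be found, measured, and indexed in a single place:

```python
# Sketch: a small data-access class replacing SQL pasted into templates.
import sqlite3  # stand-in for the real MySQL connection

class UserRepository:
    def __init__(self, conn):
        self.conn = conn

    def recent_users(self, limit=20):
        # One home for a query that used to live in several page templates.
        cur = self.conn.execute(
            "SELECT id, name FROM users ORDER BY created_at DESC LIMIT ?",
            (limit,),
        )
        return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, created_at TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', '2024-01-01')")
print(UserRepository(conn).recent_users())
```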
