Approach to speed up a DB-centric app [closed] - performance

Closed 10 years ago.
We have an application with about 65 GB of data in MS SQL Server,
around 250 tables, and roughly 1000 stored procedures and functions.
The application is almost entirely DB-centric, with nearly all of the logic coded in procedures and functions. Some of the stored procedures take over 4-5 minutes to execute. We have now been given the task of optimizing/re-engineering these slow-running stored procedures.
We don't have much information about the project/schema/design, but we do have access to the schema and data, and fortunately we only have to deal with the one module that is slow. (That module still involves many SPs and functions running over 1000 lines each and encompassing application logic.)
My question is: how do I get started with such a project? We have been set an unrealistic deadline of coming up with fixes in 2-3 days, and I have already spent a day just setting things up!
What should the approach be?
1. Suggest an increase in hardware infrastructure.
2. Re-engineer the app (push some of the computation to the application side) to make it less DB-centric?
3. Ask for more time (and how much?) to optimize this? The funny thing is that we are not the original coders and have very little idea of what is coded in the SPs and functions.
Thanks

You'll need to know the problem areas before you can attempt any fixes.
Since you say you are looking at just one module to begin with, I'd suggest using a tool like SQL Server Profiler to determine how frequently statements are executed and how long they take to run, and then using that data as a starting point to see where the logic can be optimised.
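If Profiler isn't convenient, the plan-cache DMVs give a quick first cut of where the time is going. Below is a minimal sketch, assuming Python with pyodbc and a hypothetical connection string, that ranks cached stored procedures by total elapsed time; sys.dm_exec_procedure_stats only reflects plans still in the cache, so treat it as a starting point rather than a full trace.

# Rough sketch: rank cached stored procedures by total elapsed time.
# CONN_STR is a placeholder; point it at your own server and database.
import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")

QUERY = """
SELECT TOP 20
       DB_NAME(database_id)                AS db_name,
       OBJECT_NAME(object_id, database_id) AS proc_name,
       execution_count,
       total_elapsed_time / 1000.0         AS total_elapsed_ms,
       total_elapsed_time / 1000.0 / execution_count AS avg_elapsed_ms
FROM sys.dm_exec_procedure_stats
ORDER BY total_elapsed_time DESC;
"""

def slowest_procedures():
    with pyodbc.connect(CONN_STR) as conn:
        rows = conn.cursor().execute(QUERY).fetchall()
    for r in rows:
        print(f"{r.db_name}.{r.proc_name}: {r.execution_count} calls, "
              f"avg {r.avg_elapsed_ms:.1f} ms, total {r.total_elapsed_ms:.0f} ms")

if __name__ == "__main__":
    slowest_procedures()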
Look for any operations that use cursors that could possibly benefit from a more set based approach.
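To make the cursor point concrete, here is an illustrative sketch (the Orders table, its columns, and the connection string are all hypothetical) contrasting row-by-row processing with a single set-based statement; on large tables the second form is usually dramatically faster because the work stays inside the engine instead of paying a round trip per row.

# Illustrative only: apply a 5% surcharge to orders that don't have one yet.
import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")  # placeholder
conn = pyodbc.connect(CONN_STR)
cur = conn.cursor()

# Row-by-row (what a T-SQL CURSOR effectively does): one UPDATE per order.
for order_id, amount in cur.execute(
        "SELECT OrderId, Amount FROM dbo.Orders WHERE Surcharge IS NULL").fetchall():
    cur.execute("UPDATE dbo.Orders SET Surcharge = ? WHERE OrderId = ?",
                float(amount) * 0.05, order_id)
conn.commit()

# Set-based equivalent: one statement, processed entirely inside the engine.
cur.execute("""
    UPDATE dbo.Orders
    SET    Surcharge = Amount * 0.05
    WHERE  Surcharge IS NULL;
""")
conn.commit()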
As for your three options, I'd say you HAVE to go for (3), because you've stated that you don't have a thorough understanding of the app, so you'll need to gain some further exposure in order to establish where to focus your efforts. I don't think (1) is a long-term solution, although it would obviously provide some benefit (how much depends on the current and proposed specs). You'll only know whether (2) is a valid option once you've had a chance to establish the problem areas first.
Best of luck.

Related

When an optimization is no longer "Micro-optimization" [closed]

Closed 9 years ago.
I'm a team leader/feature architect who came up from a developer position, so I have coding experience, and a lot of the things being evolved nowadays were implemented by me in the first place. Now to the point: reviewing some code for the sake of refactoring (and some nostalgia), I found a bunch of places that could be optimized, so as an exercise I gave myself 2 days to explore, and I improved a lot of things. After running a benchmark I found that the overall module performance had improved by about 5%.
So I approached some colleagues (and the team I run) and presented my changes. I was surprised by the general impression of "micro-optimization". If you look at each optimization individually, then yes, they are micro, but if you look at the big picture...
So my question here is: When is an optimization no longer considered "micro"?
Whether an optimization is micro or not is usually not important. The important stuff is whether it gives you any bang for the buck.
You wrote that you spent two whole working days for a 5% performance increase. Did you spend those days wisely? Were the things you fixed the slowest parts of your application, or at least the performance issues that were easiest to fix? Did your changes let you reach a performance target that you weren't meeting before? Does 5% performance even matter in your case? Usually you want something like a 100% or 1000% increase if you determine that you need to improve performance.
Could you perform your optimizations without disturbing readability and/or maintainability of the code?
Besides, what other costs did those optimizations incur? How much regression testing did you have to perform? How many new bugs did you create?
I know, these look more like questions than an answer, but they are the kind of questions that should drive your decision on whether or not to make an optimization.
Personally, I would differentiate between changes that lead to a reduction in algorithmic time or space complexity (from O(N^2) to O(N), for example) and changes that speed up the code or reduce its memory requirements but keep the overall complexities the same. I'd call the latter micro optimizations.
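A tiny illustration of that distinction, using a hypothetical "find the common elements" task: the second function changes the algorithmic complexity, while the third merely shaves constant factors off the first and would count as a micro-optimization.

# Find values present in both lists.

def common_quadratic(xs, ys):
    # O(N*M): scans ys once per element of xs.
    return [x for x in xs if x in ys]

def common_linear(xs, ys):
    # Algorithmic change: build a set once so each lookup is O(1) on average,
    # making the whole function O(N + M).
    ys_set = set(ys)
    return [x for x in xs if x in ys_set]

def common_micro(xs, ys):
    # Micro-optimization of the quadratic version: hoisting the bound method
    # avoids an attribute lookup per iteration, but the complexity is unchanged.
    contains = ys.__contains__
    return [x for x in xs if contains(x)]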
However, keep in mind that while this is a precise definition it should not be the only criterion for deciding whether a change is worth keeping: Reduced code complexity (as in difficulty to understand) is often more important, especially if speed and memory requirements are not a major cause of concern.
Ultimately, the answer will depend on your project: for software running on embedded devices, the rules are different than for something running on a Hadoop cluster.

KISS & design patterns [closed]

Closed 10 years ago.
I'm presented with a need to rewrite an old legacy desktop application. It is a smallish non-Java desktop program that still supports the daily tasks of several internal user communities.
The language in which the application is written is both antiquated and no longer supported. I'm a junior developer, and I need to rewrite it. In order to avoid the app-rewrite sinkhole, I'm planning on starting out with the existing database & data structures (there are some significant limitations, but as painful as the refactoring will be, this approach will get the initial work done more quickly and avoid a migration, both of which are key to success).
My challenge is that I'm very conflicted about the concept of Keep It Simple. I understand that it is talking about functionality, not design. But as I look at writing this app, it seems like a tremendous amount of time could be spent chasing down design patterns (I'm really struggling with dependency injection in particular) when sticking with good (but non-"Gang of Four") design could get the job done dramatically faster and more simply.
This app will grow and live for a long time, but it will never become a 4 million line enterprise behemoth, and its objects aren't going to be used by another app (yes, I know, but what if....YAGNI!).
The question
Does KISS ever apply to architecture & design? Can the "refactor it later" rule be extended so far as to say, for example, "we'll come back around to dependency injection later" or is the project dooming itself if it doesn't bake in all the so-called critical framework support right away?
I want to do it "right"....but it also has to get done. If I make it too complex to finish, it'll be a failure regardless of design.
I'd say KISS certainly applies to architecture and design.
Over-architecture is a common problem in my experience, and there's a code smell that relates:
Contrived complexity
forced usage of overly complicated design patterns where a simpler design would suffice.
If the use of a more advanced design pattern, paradigm, or architecture isn't appropriate for the scale of your project, don't force it.
You have to weigh the potential costs for the architecture against the potential savings... but also consider what the additional time savings will be for implementing it sooner rather than later.
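As a concrete example (a hypothetical Python sketch, not a prescription): plain constructor injection gives you most of the testability benefit people want from "dependency injection" without committing to a container or framework up front, and a framework can still be layered in later if the project ever needs one.

# Hand-rolled constructor injection: the service receives its collaborator
# instead of constructing it, so tests can pass in a fake. No framework needed.

class OrderService:
    def __init__(self, repository):
        self._repository = repository   # injected dependency

    def total_with_tax(self, order_id, rate=0.2):
        order = self._repository.find(order_id)
        return order["amount"] * (1 + rate)

# In production you wire it up by hand, e.g.:
#   service = OrderService(SqlOrderRepository(connection))   # hypothetical class
# In a test you inject a stub:

class FakeRepository:
    def find(self, order_id):
        return {"amount": 100.0}

service = OrderService(FakeRepository())
assert abs(service.total_with_tax(42) - 120.0) < 1e-9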
Yes, KISS, but see Refactoring to Patterns (http://www.amazon.com/Refactoring-Patterns-Joshua-Kerievsky/dp/0321213351) and consider refactoring towards a design pattern in small steps. The code should sort of tell you what to do.

How to compare performance between different languages? [closed]

Closed 10 years ago.
Assuming I wrote a program in two different languages and I need to compare the performance,
what aspects should I focus on other than comparing their running time?
You're focusing on the wrong thing. Really the question is whether you want to use a low-level language (faster) or a high-level language (slower). A high-level language does many things for you, and it also makes certain assumptions, which makes it slower. With a high-level language you go through multiple layers of abstraction, so it is naturally slower. If you want top performance, use C++. If you want something even faster, use assembly language. A high-level language like C# or Java is more convenient because a lot of the underlying plumbing is handled for you, but with that comes a performance decrease. Again, this comes from the assumptions that are made and the extra code that is executed which might not pertain specifically to what you are trying to accomplish.
If you want to test the performance of different languages, pick functions that require the language or platform to handle many of the underlying operations for you. Lower-level languages also tend to give you direct access to the hardware, allowing you to tweak how you interact with it. Game engines and other software that require top performance are typically written in C++ rather than a language like Visual Basic because of the amount of control and the increased performance of unmanaged code. I would focus first on the categories (graphics, etc.) where you need the increased performance and then pick some tests from there. I'm also sure you can find existing benchmarks on the internet that compare language performance.
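When you do measure, it helps to compare more than one axis. The sketch below (Python, with a made-up workload function standing in for the program under test) records wall-clock time and peak memory over repeated runs; the same idea ports to whichever pair of languages you are comparing.

# Measure wall-clock time and peak (Python-level) memory for a workload.
import statistics
import time
import tracemalloc

def workload():
    # Hypothetical stand-in for the program under test.
    return sum(i * i for i in range(1_000_000))

def benchmark(fn, runs=5):
    timings = []
    for _ in range(runs):
        tracemalloc.start()
        start = time.perf_counter()
        fn()
        elapsed = time.perf_counter() - start
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        timings.append(elapsed)
        print(f"run: {elapsed * 1000:.1f} ms, peak memory {peak / 1024:.0f} KiB")
    print(f"median: {statistics.median(timings) * 1000:.1f} ms over {runs} runs")

if __name__ == "__main__":
    benchmark(workload)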

Managing code transitions between developers [closed]

Closed 9 years ago.
What are your best practices for making sure newly hired developers quickly get up to speed with the code, and for ensuring that developers who move on don't set back ongoing releases?
Some ideas to get started:
Documentation
Use well established frameworks
Training / encourage mentoring
Notice period in contract
From a management perspective, the best (but seemingly seldom-followed) practice is to allow time in the schedule for training, both for the new employee and for the current developer who'll need to train them. There's no free lunch there.
From a people perspective, the best way I've seen for on-boarding new employees is to have them pair program with current developers. This is a good way to introduce them to the team's coding standards and practices while giving them a tour of the code.
If your team is averse to pairing, it really helps to have a few current diagrams showing how the key parts of the system are structured and how the key pieces interact. It's been my experience that for programs of moderate complexity (around 0.5M lines of code), the key points can be gotten across with a few documents, which could be a few entity-relationship diagram fragments and perhaps a few sequence diagrams that capture the high-level interactions.
From the code perspective, here's where letting cruft accumulate in the code base comes back to bite you. The best practice is to refactor aggressively as you develop, and follow enough of a coding guideline that the code looks consistent. As a new developer on a team, walking into a code base that resembles a swamp can be rather demoralizing.
Use of a common framework can help if there's a critical mass of developers who'll have had prior experience. If you're in the Java camp, Hibernate and Spring seem to be safe choices from that perspective.
If I had to pick one, I'd go with diagrams that give enough of a rough map of the territory that a new developer can find out where they are and how the bit of code they're looking at fits into the bigger picture.

What % of programming time do you spend debugging? [closed]

Closed 13 years ago.
What % of programming time do you spend debugging? What do you think are acceptable percentages for certain programming mediums?
About 90% of my time is spent debugging or refactoring/rewriting code from my coworkers that never worked but was still committed to Git as "working".
Might be explained by the bad morale in this (quite big) company as a result of poor management.
Management's opinion of my suggestions:
Unit Tests: forbidden, take too much time.
Development Environment: No spare server and working on live data is no problem, you just have to be careful.
QA/Testing: Developers can test on their own; there's no need for a separate tester.
Object Oriented Programming: Too complex, new programmers won't be able to understand the code fast enough.
Written Specs: Take too much time, it's easier to just tell the programmers to create what we need directly.
Developer Training: Too expensive and programmers won't be able to work while in the training.
Not a lot now that I have lots of unit tests. Unless you count time spent writing tests and fixing failing tests to be debugging time, which I don't really. It's relatively rare now to have to step through code in order to see why a test is failing.
How much time you have to spend debugging depends on the codebase. If it is too high, that is likely a symptom of other problems, e.g. lack of adequate exception handling, logging, testing, repeatability etc. What counts as "Too high" is subjective.
If you do have to debug an error, think about making a failing test before you fix it, so that the error does not recur.
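For instance (a minimal sketch with a made-up parsing function): capture the bug as a failing test first, then change the code until the test passes, and the test stays in the suite as a regression guard.

# Hypothetical bug: parse_price("1,299.00") used to raise on the thousands separator.
import unittest

def parse_price(text):
    # The fix: strip thousands separators before converting.
    return float(text.replace(",", ""))

class ParsePriceRegressionTest(unittest.TestCase):
    def test_handles_thousands_separator(self):
        # This test was written (and was failing) before the fix above was applied.
        self.assertEqual(parse_price("1,299.00"), 1299.00)

if __name__ == "__main__":
    unittest.main()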
The worst that I have had to work on was a large and complex simulation written entirely without tests. Sometimes it failed in the middle of a run, and to reproduce a crash involved setting a breakpoint, starting the run and waiting half an hour or more. Then make a change and repeat. Don't ever get yourself into that morale-sapping and productivity-destroying situation.
There is so much variety when it comes to writing software that it's impossible to give you a solid answer. Complexity of the software can increase debugging time, for example, if the codebase is very large and the code itself is poorly written, then that could increase the amount of time spent debugging.
One way to reduce the debugging time is to write unit tests. I've been doing this for a while and found it helps reduce the number of bugs which are released to the customer.
