I'm currently working on quite a large library (5M lines of code, in C++ under VS2005; 1 solution and close to 100 projects). Even though we distribute compilation and use incremental linking, recompilation and relinking after small source modifications takes between a few minutes (usually at least 3) and close to one hour.
This means that our modify-code/build/debug cycles tend to be really long (for my taste!), and it's quite easy to lose the 'flow' during a build: there's typically not much time to do anything useful (maybe catch up on a bit of email, or read an article online or a few pages of a book).
When writing new code or doing major refactoring, I try to compile only one file at a time. During debugging, however, it really gets on my nerves!
I'm wondering how I could optimize my time. I guess I'm not the only one in this situation: what do/would you do?
I don't know much about development at that scale, but... it seems like it would be a good idea to separate the codebase into multiple solutions. You could have a final "pre-ship" step that consolidates them all into a single .dll if you/your customers really insist.
Compare, e.g., to the .NET Framework where we have lots of different assemblies (System, System.Drawing, System.Windows.Forms, System.Xml...). Presumably all of these could be in different solutions, referencing each other's build results (as opposed to all in a single solution, referencing each other as projects).
Step by step...
The only solution is to start isolating blocks of code. If you don't have too much implementation leakage (see the example below **), start building facades that isolate the classes behind them. Move those classes to a different project, and make the facade load the DLLs on startup and redirect the calls to factory methods.
Focus on finding areas/libraries that are fairly stable and split them into isolated library DLLs. Building and versioning them separately will help you avoid integration pain.
I have been in that situation in the past, and the only way through is to tackle the task with patience.
By the way, a good side effect of splitting the code is that the interfaces become cleaner and the output DLLs get smaller! In our project, shuffling/reorganizing the code and reducing the number of gratuitous includes cut the final output size by 30%.
good luck!
** --> implementation leakage looks like a consumer calling obj->GetMemberZ()->GetMemberYT()->GiveMeTheData(param1, param2)
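A rough sketch of the facade-plus-factory idea described above (all names here, such as DataFacade, are hypothetical; it is written in modern C++ for brevity, whereas under VS2005 you would return a raw pointer instead of std::unique_ptr):

    // data_facade.h -- the only header consumers include; no internal types leak out.
    #include <memory>
    #include <string>

    class DataFacade {
    public:
        virtual ~DataFacade() = default;

        // Factory method: the implementation can live in a separately built DLL,
        // loaded at startup behind this stable interface.
        static std::unique_ptr<DataFacade> Create();

        // One call replaces the leaky chain
        // obj->GetMemberZ()->GetMemberYT()->GiveMeTheData(param1, param2).
        virtual std::string GiveMeTheData(int param1, int param2) = 0;
    };

    // data_facade.cpp -- compiled into its own project/DLL; internals stay private.
    #include "data_facade.h"

    namespace {
    class DataFacadeImpl : public DataFacade {
    public:
        std::string GiveMeTheData(int param1, int param2) override {
            // Delegate to the real classes, which consumers no longer see.
            return "data for " + std::to_string(param1 + param2);
        }
    };
    }  // namespace

    std::unique_ptr<DataFacade> DataFacade::Create() {
        return std::make_unique<DataFacadeImpl>();
    }

With this layout, consumers recompile only when the facade header changes, not every time the implementation classes behind it do.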
@Domenic: indeed, it would be a good thing... However, a whole team's been at it for some time now, and until they succeed we are stuck with a single .dll and something quite monolithic :-(
First of all, sorry for the codeless question, but I'd like to clarify one thing.
I have a senior developer on the team who actively pushes for code quality: merge request reviews, no crappy code, and the like. But most of the other guys on the team have a get-shit-done mentality. As a business guy I don't check the code at all, but if I didn't have that one person who cares about quality, in my opinion we would hit some heavy refactoring cycles at some point.
But of course there is a downside to caring about quality too much: it simply takes time. And we may have to throw away a lot of beautiful code when we have to pivot as business needs change.
Two questions: a) How do you keep up the quality of your product? What practices do you use? b) Where is the line for caring about code quality enough (neither too little nor too much)?
Code quality is important independent of whether you develop agile or not. You are absolutely right that quality improvement requires additional time. Most people fail because they spend their time in large blocks ('refactoring projects'), more or less cleaning up code in arbitrary places with the objective of reducing the number of quality issues as much as possible.
The process I advise is to follow the boy-scout rule of always leaving the code you change a bit cleaner (better) than it was before. That means whenever you change a function, procedure, method, or other code unit, fix the quality problems in it. The advantage is that you already understand the code (because you had to change it anyway) and it is going to be tested (because you need to test your original change). That means the additional effort for quality improvement (adding a comment, improving identifiers, removing redundancy, ...) is very low. In addition, you are improving only code that you are working with, and you don't waste time improving code that is never touched anyway.
Following the boy-scout rule ensures that quality does not decrease but instead steadily increases over time. It is also a reasonable level of 'caring'. I wrote some more about this here. In addition, you need a good quality analysis tool like Teamscale that can reliably differentiate between legacy problems, new problems, and problems in the code you recently changed.
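A minimal sketch of what a boy-scout improvement might look like (the function, its names, and the meaning of the constant are all made up for illustration):

    // Before: you had to touch this function anyway to fix a bug.
    double calc(double p, int n) {
        return p * n * 0.19;  // magic number, cryptic names
    }

    // After: same behaviour, but the next reader is better off.
    // (Assumption for illustration: 0.19 was a VAT rate.)
    const double kVatRate = 0.19;

    double VatForOrder(double netUnitPrice, int quantity) {
        return netUnitPrice * quantity * kVatRate;
    }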
Get and enforce a good unit testing framework and back it up with automated integration testing.
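For instance, a minimal unit test sketched with GoogleTest (the Add function is a made-up stand-in for real production code); link against gtest_main so no main() is needed:

    #include <gtest/gtest.h>

    // Hypothetical function under test.
    int Add(int a, int b) { return a + b; }

    // A small, focused check that a CI server can run on every commit.
    TEST(AddTest, SumsTwoIntegers) {
        EXPECT_EQ(Add(2, 3), 5);
    }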
As for Agile, I found the morning 10-minute scrums useful, but the end-of-sprint meetings tended to run long.
I have had good experience with SonarQube, a tool we use for static code analysis, so we can keep track of code smells, etc. in our code base. Another point is that fixing issues can be planned into sprints! IDE integration is available as well!
I have a project which requires some complicated components to be built. Some of these components are promised by obscure software packages which are proving to be poorly documented and difficult to configure and use.
I am wondering where other people draw the line during the research phase of a project when deciding whether to build their own packages or to stick with trying the existing ones.
And what percentage of the total project time should I spend on this kind of research?
Thanks in advance,
Alex
Ask yourself which is likely to take longer, hammering the components to fit your needs or writing your own.
Personally I pretty much always use solid, comprehensive libraries (jQuery for web development, DevExpress for WinForms) and fill in the gaps with my own code.
The only exception I remember off the top of my head was a tooltip plugin for a web application. I tried about three of them, wasting hours and hours adapting each to my needs: modifying their source code, playing with their images, fixing obscure CSS rules that baffle IE7 (because IE8 defaults to IE7 mode on the intranet), but never quite getting it right. Then I just gave up and rolled my own in half an hour.
Not to say there aren't plenty of good components out there that are flexible enough to be used in active development environments, but you're unlikely to find them in the heat of developing your stuff with deadlines looming overhead. Use your free-ish time to look for them and bookmark them, try them out in a few toy projects and see how they work, so the next time you need something like them you know what to use.
If you have to fix some minor bugs or otherwise have to observe some patterns the code doesn't currently take into account, consider contributing back into the code base as a good citizen.
If you find yourself having to substantially recode some pre-provided code to get it to work, then maybe the fact it was already "coded" is irrelevant. Bite down and chew.
If it's bologna and you need to reinvent the "wheel", consider that you've got a job that may not be compensating its actual value.
I usually draw the line at about 1/10. Meaning if it has already taken me, say, 1 day and I still haven't gotten the off-the-shelf thing working and it would only take me 10 days to do it myself, I do it myself.
Even when it takes a little longer, it's often better in the long run to avoid the complicated, hard-to-use thing. Or, at the very least, I get a better idea of what I really need and I can pick an off-the-shelf package with my eyes wider open.
Well, I think it all depends.
That said, it is possible to spend more time trying to configure and understand a package than you would have spent on development. I would say that if you can build it faster than you can learn the poorly documented alternative, then go for it.
Otherwise, if the existing package promises great features and will not take up too much of the overall project time, go with it. Often it is very difficult to draw the line; it all depends on the situation at hand.
Also, you could look for alternatives to what you have now.
You work on an important project that contains 7 independent modules, and you do not have enough time to finish the project on time. In front of you is a choice: complete 3 modules in full, or start work on all 7 modules in parallel but complete none of them within the planned period. Which strategy would you choose?
Explain the situation to the manager, let him decide. That's why he's there.
If you're in charge, then you need to consider all the possibilities. I'd say focus on what the customer will need first, and complete the second-priority functionality afterward. If possible, talk to them, explain, and try to reach an agreement. The customer may have a different opinion on what he will need most and can point you toward where to focus your efforts.
Whichever is better for the customer/business. If it's possible to have all 7 "features" in a semi-complete state, then go for it. If they prefer 3 polished "features", go that way.
It depends on what your customer values.
Are the modules really independent? Are they really needed in full, or can they provide value even if implemented partially?
A useful strategy is to implement vertical slices across the system, or even across a module, not horizontal layers of modules. Implement one end-to-end feature/use case/user story at a time. It is these features that bring value to your customer, not modules (unless the customer is an odd one who values modules and not features). This way you get something useful ready for testing and release, and your time is not spent writing code that nobody uses. However, when adding new features you need to keep refactoring the codebase in order to avoid the stovepipe-system anti-pattern.
In any case, implementing the 7 modules only halfway there is not the answer. Whatever you do, do it right the first time. ("Right" being of course context-dependent: different standards apply for throwaway prototypes, life-critical production code and everything in between.)
Complete the 3.
Then they can be released for testing to the client, and you can get to work on the other 4.
That very much depends on your development model and the customer requirements. In an agile environment I'd rather show the complete product (even with unfinished/mocked parts), so the customer gets an impression of it in its entirety and can give you early feedback on the unfinished modules.
If there are clear, precise specs however, then delivering the 3 finished modules is probably a better idea.
The client has a clear picture of exactly what he wants, and my job is to show him something that will attract his attention and buy me additional time to complete the project.
Each module requires one month of work, but after three months the client decides whether to continue the cooperation or not.
The user interface is the only thing that interests him. I cannot explain to him that I spent two months building the engine if he does not see it on the screen.
Suppose you have 2 projects that together provide a month's worth of work for 6 developers, split between them in a ratio of 2:1.
Is it better to assign developers to each project so they work on that project for the whole month, or is it preferable for the whole team to work on each project in turn?
What reasons do you have for your opinions?
Edit
To clarify, they are entirely separate systems.
It depends a lot on how related the two projects are. If they have a lot of similarities, I would say tackle them as two projects within one large group.
If they are mostly unrelated from a code and architecture standpoint, it would make more sense to split into two teams for the duration of the two projects, perhaps cross-training some of the developers where possible.
If you're an Agile shop, just run two concurrent iterations.
If the projects are separate systems, I believe it is wise to separate the developers working on them, since humans are not great at multitasking: our context switching is slow. So either the whole team works on one project after the other, or you split the team between the projects.
The answer to this really depends on a lot of factors. If you're 100% certain that you won't lose a developer, and that no one will go on a long vacation, then splitting it into specific teams has some advantages. OTOH, if you don't know that for sure, it might be better to have the whole team work on each project.
Next, you have to consider deadlines - will running the projects sequentially increase your risk of missing a deadline on either one, and if it does, is that risk acceptable?
Of course, there's always the potential for developers to step on each other's toes. 2 man-months (yes, I know it's a myth) of work split amongst 6 developers is a little over one week's work per person; if that's reasonable for the size of the project, then that's fine. However, there is a limit to how finely you can split the work and still have it make sense.
Answer those questions for your project, and that should give you a decent answer.
I think that in the case of unrelated projects you will get far better efficiency if you split your developers across the projects. This is because there is overhead in communication: if you double the number of developers on a project, you don't halve the time it takes.
However, if your developers need to learn both systems eventually, then this overhead has to happen at some point. Time constraints should dictate whether that occurs during or after the project.
I would split them into 2 teams assuming that there are enough developers to cover each project (Each project gets 3 developers and 3 developers is enough for each project). I don't think it's effective to put more developers on a project than is necessary.
Edit:
This is not taking into account all the many other factors that go into this type of decision (developer productivity, skill, availability)
Why should companies invest in refactoring components, even though it is not going to add any new features to the product?
I agree it is done to clean up the code, fix bugs, and remove dead code, but what is the payoff?
Maintenance. It will reduce your maintenance costs significantly. There is no comparison between fully factored code and the junk that sits in most companies' repositories. The latter is virtually worthless, while the former is gold.
It depends who you ask. A non-technical manager may say there is no need. A support developer would say that it would help keep the maintenance costs down.
Refactoring needs to be a part of your every-day job. You constantly refactor your code to make it more readable/maintainable/robust/reusable, etc.
Your code is a living document. If it doesn't change over time, it becomes stagnant.
Invest in testing. Invest in refactoring. Invest in writing good code.
Maintenance. Sometimes a project gets too big, or has too many "quick" patches, to be expanded further. You just have to sit down calmly, clean up, and refactor.
While the other answers are all true, the power of refactoring is that it allows you to change the design of your code with predictable results. The biggest problem of maintenance is that it is virtually impossible to anticipate all the requirements of a complex application.
Most new requirements can be dealt with by adding a new feature, like a new report or command. But others will require part of your application to be redesigned. This is where refactoring and its sibling, unit testing, come into play. By using refactoring techniques you can make the needed design changes safely.
It is not a cure-all, but another tool that improves the quality of your code (like structured programming, object orientation, etc.).
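As a small illustration (all names made up), an "extract function" refactoring changes the design without changing behaviour, and a unit test that passes both before and after the change is what makes the result predictable:

    // Before: the discount rule was buried inline in the billing routine:
    //     double total = unitPrice * quantity;
    //     if (total > 1000.0) total *= 0.95;  // bulk discount
    //     return total;

    // After extracting the rule into its own function, the design is easier
    // to change, while behaviour stays identical.
    double ApplyBulkDiscount(double total) {
        return total > 1000.0 ? total * 0.95 : total;
    }

    double InvoiceTotal(double unitPrice, int quantity) {
        return ApplyBulkDiscount(unitPrice * quantity);
    }

    // The guard: this assertion holds both before and after the refactoring.
    //   EXPECT_DOUBLE_EQ(InvoiceTotal(600.0, 2), 1140.0);  // 1200 * 0.95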
To start off with: refactoring is a tax. If the code works, then you are spending time fixing code that already works; I can see the business types looking quizzical now. A saying I like is "Legacy is another word for code that works."
Now there are many problems with a growing code base that need to be addressed before you start to spend more time maintaining the code than developing features.
Personally I like the "No Broken Windows" philosophy.
If it ain't broke don't fix it.
But if you need to start fitting the components into new, unpredictable requirements, then it often makes sense to identify the bits you can extract and reuse. You need to be certain that your changes aren't introducing unexpected bugs, so you'll need good test coverage.