I have a team lead who seems to think that business logic is very subjective, to the point that if my stored procedure has a WHERE ID = #ID clause, he would call this "business logic".
What approach should I take to define “business logic” in a very objective way without offending my team lead?
I really think you just need to agree on a clear definition of what you mean when you say "business logic". If you need to be "politically sensitive", you could even craft the definition around your team lead's understanding, then come up with another term ("domain rules"?) that defines what you want to talk about.
Words and terms are relatively subjective. Of course, once you leave that company you will need to "re-learn" industry standards, so it's always better to stick with them if you can; but the main goal is to communicate clearly and get work done.
One way to differentiate is that "business logic" is something the customer would care about and that could be explained to a customer without referring to computer-specific words.
You could try to argue your point with a timed example: run a SQL SELECT against an indexed table, and then run a loop in code to find exactly the same item in the same set. The code will be much slower.
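Here's a minimal sketch of that experiment in Python, using sqlite3 purely because it ships with the standard library; the table name and row count are arbitrary, and the exact numbers will vary by engine and hardware:

    import sqlite3
    import time

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")
    conn.executemany("INSERT INTO items VALUES (?, ?)",
                     ((i, "row-%d" % i) for i in range(1000000)))

    target = 987654

    # 1. Let the database use its index on the primary key.
    start = time.perf_counter()
    conn.execute("SELECT payload FROM items WHERE id = ?", (target,)).fetchone()
    print("indexed SELECT:", time.perf_counter() - start)

    # 2. Pull every row into application code and scan for the same item.
    start = time.perf_counter()
    for row_id, payload in conn.execute("SELECT id, payload FROM items"):
        if row_id == target:
            break
    print("loop in code:  ", time.perf_counter() - start)

The indexed lookup is a quick index probe, while the loop has to ship and inspect every row, so the gap only grows with table size.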
Let the database do what it was designed to do: select sets and subsets of data. :) Realistically, though, I think all you can do is get your team together to build a set of standards which you will all code to. Democracy rules!
I've been programming for the last 6 years. I just recently started my first degree in computer science. My work seems to be constantly marked down for different reasons, among them:
Uncommented code
Writing identifier names and methods that are too long
Writing too many methods
After working as a programmer for six years for numerous startup companies, and absorbing best practices including the requirement to write "self-explanatory code", I find it very difficult to go back to bad practices.
What can I do?
Self-documenting code is not a substitute for comments.
I've argued with many senior devs about this point. Code can go a long way in communicating intent, but there are some things which simply cannot (and should not) be documented through code alone.
For example, you might have a highly optimised function/method or chunk of code which is heavily coupled to the underlying problem domain and requires very specific knowledge of the business or solution; comments are needed in these scenarios.
Yes, yes, comments come with their fair share of problems, but this doesn't mean they aren't helpful (or mandatory in certain cases).
I can't tell you how many times I've read a colleague's line of code and thought "what the hell?!?", only for them to explain that they needed to do that due to some quirk of a library or a browser we were targeting, etc.
Comments are a mechanism for a developer to justify a design decision.
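To make that concrete, here's a contrived Python example (the proxy quirk and ticket number are invented) of the kind of comment code alone can't replace: the code says what, the comment says why:

    def normalise_header(value):
        # Strip a UTF-8 BOM first: a proxy we target re-adds one to the
        # first header line, which breaks downstream parsing (see FOO-123).
        return value.lstrip("\ufeff").strip().lower()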
As for your other problems, they are subjective. How long is too long? How many is too many?
Point them at Microsoft's guidelines if you are on the MS stack; otherwise, there will be countless articles for whichever language you're using...
Hope that helps.
What programming language/method would be best suited to writing a desktop app that filters question types and displays a listing of those questions to view? For example, if I have a mix of algebra, geometry, and calculus questions stored in the app, I should be able to select just the algebra questions to view and print.
I have a little experience with python/django but I've never made a desktop app before.
You have lots of options. You will need to make several design decisions before you move forward. Things to consider are:
Which technologies do you feel comfortable with?
How much time/effort do you want to put into the project?
Are you willing to spend money on tools?
Etc.
That being said, the rest of this answer is to give you some options to consider:
You'll need a data structure which can filter the problems for you. From your description, the first thing I thought of was using a database; however, I'm not sure if you are familiar with databases, in which case you'd have to create some classes/structs that would allow you to do the filtering yourself. Some options for databases are SQL Express, Oracle, MySQL, DB2, and many more.
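For instance, here's a rough sketch of the database route using sqlite3, which ships with Python so there's no separate server to install (the schema and names are just for illustration):

    import sqlite3

    conn = sqlite3.connect("questions.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS questions (
            id INTEGER PRIMARY KEY,
            category TEXT NOT NULL,  -- e.g. 'algebra', 'geometry', 'calculus'
            body TEXT NOT NULL
        )
    """)
    conn.execute("INSERT INTO questions (category, body) VALUES (?, ?)",
                 ("algebra", "Solve 2x + 3 = 11"))
    conn.commit()

    # "Show me just the algebra questions":
    for (body,) in conn.execute(
            "SELECT body FROM questions WHERE category = ?", ("algebra",)):
        print(body)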
Another thing to consider is that you mentioned several different types of math problems, so you'd want to think about how you would display them. Mathematica formats math problems nicely, but if you wanted to go down this road, you'd either have to find a tool that lets you display the math problems in a Mathematica-like syntax, or do exports/screenshots of the problems and include those as part of your program.
Another option would be to try to find a language that has some sort of plugin for TeX or LaTeX. (For example, you can see how Wikipedia allows for nice math formatting here: http://en.wikipedia.org/wiki/Help:Displaying_a_formula)
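As one hedged example on the Python side, matplotlib's built-in mathtext can render TeX-style formulas without a full TeX installation (just one option among many):

    import matplotlib.pyplot as plt

    fig = plt.figure(figsize=(4, 1))
    fig.text(0.5, 0.5, r"$\int_0^1 x^2\,dx = \frac{1}{3}$",
             fontsize=18, ha="center", va="center")
    plt.show()  # or fig.savefig("formula.png") to embed the image elsewhere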
This sounds like a good pet project to play with to learn different technologies. If that is the intent, great. If not, then you might want to do some googling to see if someone else has already created what you are looking for.
We are about to move a project on Apache Cassandra from test to pilot, and as an RDBMS team, we were probably missing something.
Basic rules (or lessons learned):
be sure you have big or almost no data (nothing in between)
do not believe in extremely cheap storage (cheap or not expensive might be better)
think of your primary key as if it were a reverse index
think of time (or another data-creation order) as if it were a row/clustering key (see the sketch after this list)
forget about 100% foreign keys whenever you can
sample if you can
do not care about dups
JSON and asynchronous time aggregation on the client can make CPUs more relaxed
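A rough sketch of the primary-key/clustering-key point above, using the DataStax Python driver against an assumed local node (keyspace, table, and column names are invented for illustration):

    from datetime import datetime
    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("my_keyspace")

    # The partition key works like a reverse index (look rows up by
    # sensor_id); event_time as a clustering key stores rows in
    # data-creation order within each partition.
    session.execute("""
        CREATE TABLE IF NOT EXISTS readings (
            sensor_id text,
            event_time timestamp,
            value double,
            PRIMARY KEY (sensor_id, event_time)
        ) WITH CLUSTERING ORDER BY (event_time DESC)
    """)

    # Reads then hit a single partition and a contiguous time slice,
    # which is the access pattern Cassandra is built for.
    rows = session.execute(
        "SELECT value FROM readings WHERE sensor_id = %s AND event_time > %s",
        ("sensor-42", datetime(2013, 1, 1)))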
ETL:
sample history if you can (or sample it just for reporting usage on a separate reporting cluster)
single-threaded data streams spread over a couple of servers will come in handy
if you can afford asynchronous processing, you can profit from knowledge of data patterns
throw scrap data away (horizontally and vertically), or it will mislead BI people or, in the worst case, even board members
do not care about dups
The question is: am I still missing something? Are there other ways to achieve even better performance?
LINQ is extremely powerful and can be used heavily in code. But is it best practice to use it?
It's a good idea to use it when it makes the code clearer and simpler to maintain, and when you're not in any of the situations where the performance of LINQ is too slow for your needs.
You haven't specified whether you're talking about LINQ to Objects or LINQ to SQL etc, but I know there are situations where the latter has proved too slow for some high traffic sites, and they've moved off it... but only after it's been shown to be an issue. LINQ to Objects will often have a very small performance hit compared with "hard-coding" the same logic, but that's even less likely to be a real problem.
Of course LINQ can certainly be overused, and I've seen people reaching for a LINQ solution when there are far more appropriate ways of achieving the same thing - so don't try to use it everywhere you possibly can. Just use it where it clearly helps.
The declarative nature of LINQ is one of its strongest features. It almost always makes your code more readable and maintainable, so yes, unless there is a compelling performance reason not to, I'd say that it is best practice.
Christoph Koutschan has set up an interesting survey that tries to identify the most important algorithms "in the world". Since one of the criteria is that "the algorithm has to be widely used", I thought that extending the survey to the huge group of users at Stack Overflow would be a natural thing to do.
So, what do you think? Which algorithms deserve a place in the Algorithm Hall of Fame?
I somewhat like this algorithm:
1. Write code.
2. Test code. If buggy, go to step 3. If not, go to step 4.
3. Rewrite code, then go back to step 2.
4. Get somebody else to test your code. If they discover any bugs, return to step 3; otherwise go to step 5.
5. Congratulations, your code has no obvious bugs! Now you wait for a user to stumble upon a hidden one, in which case you return to step 3 once again, unless you're lucky and are no longer providing support for the code in question.
I'd say binary search, since it's usually the first algorithm people learn. And the RSA encryption algorithm is pretty important.
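For reference, a plain Python sketch of the binary search mentioned above:

    def binary_search(items, target):
        """Return the index of target in the sorted list items, or -1."""
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            if items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    assert binary_search([1, 3, 5, 7, 9], 7) == 3
    assert binary_search([1, 3, 5, 7, 9], 4) == -1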
Hashing, since it's the basis for so much in security, data structures, etc. Hashing algorithms have generated a lot of Ph.D. dissertations.
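Two everyday faces of hashing, sketched in Python: a cryptographic digest for the security side, and the hash table behind dict for the data-structure side:

    import hashlib

    # Security: a fixed-size fingerprint of arbitrary content.
    print(hashlib.sha256(b"hello").hexdigest())

    # Data structures: dict keys are hashed to buckets, giving
    # average O(1) lookup.
    index = {"alice": 1, "bob": 2}
    print(index["bob"])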