Is it worth learning LINQ for hobbyists or professionals who are free from formal job requirements? We can achieve the same things LINQ does with traditional language features such as loops and arrays.
Why bother duplicating a way to achieve the same objective?
I find LINQ tends to let you focus more on intent: what the code is doing, rather than how it is doing it. To put it another way, it's more of a declarative form of programming (like functional programming) than an imperative style.
I've found using LINQ to be very beneficial, from the standpoint that I don't have to write a for loop to iterate over a collection and perform some operation. Instead, I can use a single-line LINQ statement to do that for me.
One simple example is someCollection.OrderBy(c => c.propertyOne); doing that on your own would take a fair bit more code.
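To make that concrete, here is a minimal sketch of the two approaches side by side (the Item type and its propertyOne field are hypothetical, invented just for illustration):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Item
    {
        // Hypothetical field, mirroring the propertyOne in the snippet above.
        public int propertyOne;
    }

    class Program
    {
        static void Main()
        {
            var someCollection = new List<Item>
            {
                new Item { propertyOne = 3 },
                new Item { propertyOne = 1 },
                new Item { propertyOne = 2 }
            };

            // Declarative: a single call states the intent ("order by this key").
            var sorted = someCollection.OrderBy(c => c.propertyOne).ToList();

            // Imperative equivalent: copy the list, then sort it in place
            // with an explicit comparison.
            var sortedByHand = new List<Item>(someCollection);
            sortedByHand.Sort((a, b) => a.propertyOne.CompareTo(b.propertyOne));

            foreach (var item in sorted)
                Console.WriteLine(item.propertyOne); // prints 1, 2, 3
        }
    }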
LINQ isn't so much something you need to learn as something you probably will learn anyway. It's now part of the framework, along with some useful additional syntax. If you aim to write clean, maintainable code, you will no doubt want to write code that leverages the LINQ extensions.
Apart from the syntactic sugar of "from ... where ... select" in C#, you'll find that LINQ leverages other parts of the framework and language anyway: extension methods, enumerators, lambda expressions, and delegates. All of these are increasingly hard to avoid.
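For example, the query syntax is just sugar over those pieces; the two forms below are equivalent (a self-contained sketch with made-up data):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class QuerySugar
    {
        static void Main()
        {
            int[] numbers = { 1, 2, 3, 4 };

            // Query syntax: the "from ... where ... select" sugar.
            IEnumerable<int> evensQuery = from n in numbers
                                          where n % 2 == 0
                                          select n;

            // What it compiles to: an extension method on IEnumerable<T>,
            // with the lambda converted to a delegate and the result
            // evaluated lazily through an enumerator.
            IEnumerable<int> evensMethod = numbers.Where(n => n % 2 == 0);

            Console.WriteLine(string.Join(", ", evensQuery));  // 2, 4
            Console.WriteLine(string.Join(", ", evensMethod)); // 2, 4
        }
    }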
When it comes to using frameworks that provide their own LINQ provider implementations, for example Entity Framework or LINQ to SQL, that's another story. I would learn those based on requirements. In your case, if your side project needs some database CRUD work, you might look at either of those.
As far as I know, the main difference between declarative and imperative programming is that in declarative programming you specify what the problem is, while in imperative programming you state exactly how to solve it.
However, it is not entirely clear to me when to use one over the other. Imagine you are required to solve a certain problem: based on which properties do you decide to tackle it declaratively (e.g. using Prolog) or imperatively (e.g. using Java)? For what kinds of problems would you prefer one over the other?
Imperative programming is closer to what the actual machine performs. It is quite a low-level form of programming, and the more complex your application grows, the harder it becomes to grasp all the details at such a low level. On the plus side, being close to the machine, you can write quite performant code if you are good at it.
Declarative programming is more abstract and higher level: With comparatively little code, you can express quite sophisticated relationships in a way that can be more easily seen to be correct.
To see an important difference, compare for example pure Prolog with Java: Suppose I take away one of the rules in a Prolog program. I know a priori that this can make the program at most more specific: Some things that held previously may now no longer hold.
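A tiny sketch of that property (the facts are made up purely for illustration):

    % Two facts stating which items are small.
    small(apple).
    small(pebble).

    % ?- small(apple).   succeeds
    % ?- small(pebble).  succeeds
    %
    % Remove the fact small(pebble). and the program can only become more
    % specific: small(apple) still succeeds, small(pebble) now fails, and
    % nothing that previously failed can start succeeding.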
On the other hand, suppose I take away a statement in a Java program: Nothing much can be said about the effect in general. The program may even no longer compile.
So, changes in an imperative program can have unforeseen effects and are extremely hard to reason about, because there are few guarantees and invariants, and many things are implicit in the program's global state. This makes imperative programming very error-prone.
While browsing for a contains method, I came across the following Q&A:
contains-method-for-a-slice
It is said time and again in this Q&A that the method is really trivial to implement. What I don't understand is: if it is so easy to implement, and seeing how DRY is a popular software principle and most modern languages implement such a method, what sort of design reasoning could be behind the exclusion of such a simple method?
The triviality of the implementation depends on the scope of the implementation. It is trivial to implement when you know how to compare each value. Application code usually knows how to compare the types used in that application. But it is not trivial to implement in the general case for arbitrary types, and that is the situation for the language and standard library.
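For instance, once the element type is fixed, the whole thing is a few lines (a sketch using string as an example element type):

    package main

    import "fmt"

    // containsString reports whether needle occurs in haystack.
    // Trivial here only because == is defined for the concrete type string.
    func containsString(haystack []string, needle string) bool {
        for _, s := range haystack {
            if s == needle {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(containsString([]string{"a", "b", "c"}, "b")) // true
    }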
Figuring out if a slice contains a certain object is an O(n) operation, where n is the length of the slice. This would not change if the language provided a function to do it. If your code relies on frequently checking whether a slice contains a certain value, you should reevaluate your choice of data structures; a map is usually better in these kinds of cases. Why should the standard library include functions that encourage you to use the wrong data structure for the task at hand?
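A sketch of the map-based alternative:

    package main

    import "fmt"

    func main() {
        // A map used as a set: membership tests are O(1) on average,
        // versus O(n) for scanning a slice on every lookup.
        seen := map[string]struct{}{
            "a": {},
            "b": {},
        }

        _, ok := seen["b"]
        fmt.Println(ok) // true
    }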
I love functional programming and I love Ruby as well. If I can code an algorithm in a functional style rather than an imperative style, I do it. I tend not to update or reuse variables where possible, avoid using "bang!" methods, and use "map", "reduce", and similar functions instead of "each" or bare loops, etc. Basically I try to follow the rules of this article.
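For instance, the kind of rewrite I mean (a trivial made-up example):

    numbers = [1, 2, 3, 4]

    # Imperative style: mutate an accumulator inside a loop.
    squares = []
    numbers.each { |n| squares << n * n }

    # Functional style: no mutation; the intent is in the method names.
    squares = numbers.map { |n| n * n }
    sum_of_squares = numbers.reduce(0) { |sum, n| sum + n * n }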
The problem is that usually the functional solution is much slower than the imperative one. In this article there are clear and scary examples of that, with functional versions being up to 15-20 times slower in some cases. After reading it and doing some benchmarks, I am wary of continuing with the functional style, at least in Ruby.
On the other hand, I feel more comfortable writing code in a functional style because it is smart and clean, it tends to produce fewer bugs, and I think it is more "correct", especially nowadays when we can use concurrency and parallelism for better performance.
So I am very confused about which style to use in Ruby. Any wise recommendation will be appreciated.
When you are rapidly prototyping features, should you really worry about code quality and optimization?
Looking back at the number of times a "prototype" ended up becoming the product, the answer would be yes.
Don't forget that you are not only prototyping the feature, you are also prototyping the design.
Yes to quality. No to optimization.
I would focus on clarity.
If quality and optimisation are requirements for the prototype then yes. If not, then no. Just because you are doing rapid prototyping you don't abandon standard operating procedures like programming to a specification, using source code control, testing, etc. It is, perhaps, relatively unusual for high performance to be a requirement for a rapidly developed prototype, but that's another matter.
Yes. Focus on quality, clarity, and simplicity, AND on comments that explain what the code is doing and why (don't bother with the how unless it's really complicated; that is what the code is for).
Just about all the work we do here starts out as a "what if?", and if it works we continue with it.
We write comments that describe what should happen, before we write the code, then write the code to match the comments. Writing the comments first forces you to think about how you will structure it all. We've found that it prevents a lot of FALSE assumptions and actually makes development faster.
It also makes reuse so much easier when you come back to the code later: you don't need to read the code and understand it, just read the comments. Don't buy into the nonsense of self-documenting code; all that does is self-document the bugs, leaving you nothing to check the code against to see whether it matches the comments/documentation at all.
You can worry about optimization later - see this description of a huge win I got by changing from MFC CMaps to STL when working on a hobby project parsing some Apache log files. This was done after I had the initial concept working and only when it became apparent there was a problem with performance.
I'm currently in the process of modifying a legacy editor application, and I need to add a few data structures which I have made into a class of their own and later add to a collection object. So far, though, I'm a little unclear on where to put all of the functions related to that object. I'm thinking of an OO-like design, but I'm not quite sure how to do this in VB6. At the moment all the functions are in a code module file, declared as public functions.
Is there any good reference, book, or other resource from which I can learn more about how to properly design a VB6 app, both for the current work and for future work?
Thanks.
Are you familiar with Rocky Lhotka's work? I would recommend reading Visual Basic 6 Business Objects.
Visual Basic 6 Business Objects provides a thorough introduction to employing objects that are used to model real-world business problems.
You can also visit www.lhotka.net
Edit: I know it sounds like a lot of trouble, but I would really recommend you take the time to read Rocky's book. He talks about simulating OOP principles in VB6, e.g. simulating inheritance, etc.
Another good source of information is Deborah Kurata; she's written a series of books about OO coding in VB. She is less well known than Rocky Lhotka (who is excellent) and concentrates more on pure OO, not the ORM/DB layer that he covers.
All the references cited so far are good. However, Design Patterns by the Gang of Four is also usable for Visual Basic 6. The trick is to remember that most of the patterns discussed in Design Patterns rely on implementing interfaces, which VB6 can do well; in fact, you will find that most design patterns involve implementing interfaces.
This is because most design patterns focus on setting up how various objects interact, as opposed to reusing behavior, so interfaces become much more important.
Design Patterns by the GoF
Patterns by Martin Fowler
The various GUI and presentation patterns are the most applicable, in my opinion.
My own application is structured completely as a series of design patterns. For example, I use a Passive View for my presentation layer. The various views call command objects, which perform the actual modifications of the model. I use factories to retrieve the lists of reports, file types, and shapes my software supports. All done in VB6 using the Design Patterns book by the GoF.
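As a minimal sketch of the interface mechanism those patterns rely on (IShape and Circle are hypothetical class modules, just for illustration):

    ' IShape.cls - in VB6 an "interface" is simply a class whose members are empty.
    Public Sub Draw()
    End Sub

    ' Circle.cls - supplies the actual behaviour behind the interface.
    Implements IShape

    Private Sub IShape_Draw()
        ' circle-specific drawing code goes here
    End Sub

    ' Client code then works against the interface:
    '   Dim shp As IShape
    '   Set shp = New Circle
    '   shp.Draw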
Before getting deep into the theological aspects of OOP those books cover, you might begin by simply reading the VB6 documentation, in particular the sections on component design.