Introduction to algorithms [closed] - algorithm

How does reading the book Introduction to Algorithms (CLRS) help me? How is learning this course connected with the other areas of theoretical computer science? (I mean intuitions and insights, if any, that I could get.)
I'm new to these concepts. I am getting bored of the sorting algorithms that I am learning in the course right now. I wanted to have a broader view while learning the course. It would be very helpful if you could provide me with a structure of how things fit together. Thanks in advance! :)

Algorithms are the practical application of theoretical knowledge in computer science; they're the most theoretical part of the engineering side of computer science, so to speak. Without the study of algorithms, anyone in software would either be an amateur - because computation is useless without efficiency - or wouldn't produce much of anything since he would have to focus on solving problems all the time instead of actually writing implementations that are known to solve problems.
From a didactic point of view, algorithms are a distillation of theoretical knowledge into a precise expression. You may understand what graph traversal is and how strongly connected components should be contracted; if you try to give a succinct form to those thoughts, the best way to do it is writing down an algorithm that does what you want.
On a formal level, they help us understand the concepts we grapple with; when we claim some problem can be solved in this or that complexity, we need an algorithm to prove it. For example, if you read that sorting is in O(n log n) in the general case, you can just go ahead and believe your professor; maybe you even have an intuition why that might be true. But to actually prove it, you need an algorithm that solves sorting for which you then prove that it runs in O(n log n) in the general case. So on the theoretical level, algorithms help us classify problems according to their complexity (read: "difficulty").
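To make that concrete, here is a minimal merge sort sketch (in Java; neither the answer nor CLRS prescribes a language, so the code below is only an illustration). The recurrence for it, T(n) = 2T(n/2) + O(n), solves to O(n log n), which is exactly the kind of algorithm-plus-analysis pairing described above.

    import java.util.Arrays;

    public class MergeSortDemo {
        // Sorts a[lo..hi); recurrence T(n) = 2T(n/2) + O(n)  =>  O(n log n).
        static void mergeSort(int[] a, int lo, int hi) {
            if (hi - lo <= 1) return;              // base case: 0 or 1 element
            int mid = lo + (hi - lo) / 2;
            mergeSort(a, lo, mid);                 // sort left half
            mergeSort(a, mid, hi);                 // sort right half
            merge(a, lo, mid, hi);                 // merge the two sorted halves in O(n)
        }

        static void merge(int[] a, int lo, int mid, int hi) {
            int[] tmp = new int[hi - lo];
            int i = lo, j = mid, k = 0;
            while (i < mid && j < hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
            while (i < mid) tmp[k++] = a[i++];
            while (j < hi)  tmp[k++] = a[j++];
            System.arraycopy(tmp, 0, a, lo, tmp.length);
        }

        public static void main(String[] args) {
            int[] a = {5, 2, 9, 1, 5, 6};
            mergeSort(a, 0, a.length);
            System.out.println(Arrays.toString(a)); // [1, 2, 5, 5, 6, 9]
        }
    }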

I'm not really sure that this question has a specific answer and that this is the right place to ask it, but it is still a useful one. Aside from trusting the people that have spent much of their lives guiding people to learn a skill set they will use for the rest of their lives (your professors), I have always looked at algorithm design as a way to learn how to think more clearly. This is something I believe everyone can learn from.
Also, when I was a student there were many times I was frustrated with what I was being asked to learn, believing it to be a waste. Virtually all of it I have since found to be very useful and use frequently. Thinking back, I wish I had given some of my professors much more credit than I did when I was in school.

Related

What are the benefits of organizing programming code? [closed]

Why do we need to organize code? What are the objectives of organizing code? Organizing code is a time-consuming process until it becomes a habit. I am trying to estimate the costs and benefits of organizing programming code.
Imagine you are at the library; none of the books at the library are organized. If your work depends on finding references in books, you will waste a lot of time searching for the books. This may be a quick process if you have only a few hundred books, but when you have thousands or tens of thousands of books, you will need to ensure the books stay organized in order to efficiently locate them. You could also say "Organizing books is a time consuming process", but the end result is that it saves you time when/if they are kept organized.
The same thing happens as software becomes more complex. People won't want to add poorly organized code to a well-organized program or codebase. It's hard to use and maintain programs which are complex and organized poorly (or not at all).
One of the biggest problems if you are faced with organizing a codebase is that it's very monotonous and time consuming -- it's easy to (unknowingly) introduce changes which result in bugs; these changes should receive significant testing (but it's not likely that a disorganized codebase has high test coverage). Disorganized programs which are reused and/or have long lifetimes usually require significantly more maintenance time over the life of the program.
If you're just banging out a proof of concept that is 100 lines and will remain independent of all other programs, you don't have to obsess over the organization of that program.
Organized code becomes much easier to maintain and extend over time than code that is placed wildly about. That's why programmers take so much care to name variables/methods/etc. well, keep methods short and specific, and so on. I would recommend reading Clean Code by Robert Martin.
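As a small, hypothetical illustration of that advice (the invoice example, names, and tax rate below are my own, not from Clean Code), the same calculation reads far better when split into short, well-named methods than when written as one long block:

    import java.util.List;

    // Hypothetical example: the top-level method reads like a description
    // of what happens because the steps are extracted into named methods.
    public class InvoiceReport {
        static final double TAX_RATE = 0.20;   // assumed flat tax rate for the example

        static double totalWithTax(List<Double> lineItems) {
            double subtotal = subtotal(lineItems);
            return subtotal + tax(subtotal);
        }

        static double subtotal(List<Double> lineItems) {
            return lineItems.stream().mapToDouble(Double::doubleValue).sum();
        }

        static double tax(double subtotal) {
            return subtotal * TAX_RATE;
        }

        public static void main(String[] args) {
            System.out.println(totalWithTax(List.of(10.0, 20.0, 5.5))); // 42.6
        }
    }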

TDD Naïve Text Search Algorithm [closed]

I need to test drive Naïve string search algorithm.
http://en.wikipedia.org/wiki/String_searching_algorithm
Can someone shed some light on how I could approach the issue?
Should my tests only be testing outside behaviour (i.e. the indexes at which the pattern occurs, irrespective of the algorithm used)?
Or should I be algorithm specific and test drive algorithm specific implementations?
"Or should I be algorithm specific and test drive algorithm specific implementations?"
This largely depends on how your class will be used. Testing public contract is usually the way to go (and it's fairly easy to write decent tests for that), so unless your clients can somehow use implementation details knowledge, I'd stick to that.
Note that having the specific algorithm on paper could help pinpoint a few basic tests without writing strictly implementation-related tests, like:
invalid input (empty strings, nulls)
input being too large/too small (like, pattern exceeding searched string length - what do you do then?)
valid input, yet matching nothing
This should give you a basic entry point for more implementation-specific testing. Keep in mind that data-driven testing can help you avoid the need for implementation-level knowledge altogether, and with a large enough data set it might be enough to verify algorithm correctness as well.
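A minimal sketch of what testing the public contract could look like, covering the cases listed above (plain Java 16+, with a hand-rolled table of cases standing in for a real test framework such as JUnit; the method name indexOf and the specific cases are my own assumptions):

    import java.util.Arrays;
    import java.util.List;

    public class NaiveSearchTest {
        // Naive string search: index of the first occurrence of pattern in text,
        // or -1 if there is none (an empty pattern matches at index 0).
        static int indexOf(String text, String pattern) {
            if (text == null || pattern == null) throw new IllegalArgumentException("null input");
            for (int i = 0; i + pattern.length() <= text.length(); i++) {
                if (text.regionMatches(i, pattern, 0, pattern.length())) return i;
            }
            return -1;
        }

        // Data-driven cases: empty input, pattern exceeding the text, valid input with no match.
        record Case(String text, String pattern, int expected) {}

        public static void main(String[] args) {
            List<Case> cases = Arrays.asList(
                new Case("", "a", -1),            // empty text
                new Case("abc", "", 0),           // empty pattern
                new Case("ab", "abc", -1),        // pattern exceeds text length
                new Case("abcdef", "xyz", -1),    // valid input, no match
                new Case("abcabc", "cab", 2)      // ordinary match
            );
            for (Case c : cases) {
                int actual = indexOf(c.text(), c.pattern());
                if (actual != c.expected())
                    throw new AssertionError(c + " but got " + actual);
            }
            System.out.println("all cases passed");
        }
    }

Because the cases only exercise the public contract, the same table would keep passing if the naive search were later swapped for KMP or Boyer-Moore.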

Minimum Knowledge required in Data Structures [closed]

I have found numerous data structures on Wikipedia, and have also looked into several books on data structures, and found that they vary. I want to know what the basic or minimum list of data structures is that a new CS graduate should know.
Also, is it necessary to know their implementation in more than one programming language, considering the implementations differ? If I know the implementation of a linked list in C, should I also know its Java-based implementation?
It would be great if you could help me understand categorically:
Basic data structures (necessary for a CS grad)
Advanced data structures
Edit: I am more interested in the list of data structures.
Have a look at Introduction to Algorithms by Cormen et al. In my experience, if you know what is in there you are set for anything coming at you.
I would not consider knowing any implementation very useful. If you know the basics you should be able to implement your own version quickly, but chances are you will never have to because there are libraries for that. So the rule for practice is: know your libraries!
Even so, it is important that you know properties of data structures (e.g. space overhead, runtimes of central operations, behaviour under concurrent accesses, (im)mutability, ...) so you will always use the one best suited to your task at hand.
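For example (a small Java sketch of my own; the answer does not name specific libraries), java.util already ships the standard structures, and picking between them is mostly a matter of knowing exactly those properties:

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.TreeMap;

    public class LibraryStructures {
        public static void main(String[] args) {
            // ArrayList: O(1) amortized append, O(1) random access, O(n) insert in the middle.
            ArrayList<Integer> list = new ArrayList<>();
            list.add(42);

            // ArrayDeque: O(1) push/pop at both ends, so it covers stacks and queues.
            ArrayDeque<String> stack = new ArrayDeque<>();
            stack.push("job");
            stack.pop();

            // HashMap: expected O(1) lookup, no ordering guarantees.
            HashMap<String, Integer> counts = new HashMap<>();
            counts.merge("word", 1, Integer::sum);

            // TreeMap: O(log n) lookup, but keeps keys sorted (a balanced BST underneath).
            TreeMap<String, Integer> sorted = new TreeMap<>(counts);
            System.out.println(list + " " + sorted.firstKey());
        }
    }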
This question is really a little too broad, even the way you've narrowed it down, because it depends on what sort of future path you're looking at. Grad school? PhD track? Industry? Which industry?
But as a rough minimum, I'd say, take a look at CLRS (as Raphael suggests) and pick out the following:
Linked lists, and the variations like stacks, queues, etc.
Basic heaps
Basic hash tables
Trees, especially including binary search trees, and preferably familiarity with at least one self-balancing BST
Graphs, both matrix-representation and adjacency list representation
And probably some more based on what sort of job you're looking for. As someone on a PhD track... well. All of them. At some point you will take a qualifier and be expected to know most of them.
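As a small illustration of the two graph representations mentioned in the list above (a sketch of my own, not taken from CLRS), the adjacency list is usually the default choice unless the graph is dense:

    import java.util.ArrayList;
    import java.util.List;

    public class GraphRepresentations {
        public static void main(String[] args) {
            int n = 4;
            int[][] edges = {{0, 1}, {0, 2}, {1, 2}, {2, 3}};

            // Adjacency matrix: O(n^2) space, O(1) edge lookup.
            boolean[][] matrix = new boolean[n][n];
            // Adjacency list: O(n + m) space, iterate neighbours in O(degree).
            List<List<Integer>> adj = new ArrayList<>();
            for (int v = 0; v < n; v++) adj.add(new ArrayList<>());

            for (int[] e : edges) {             // undirected graph: store both directions
                matrix[e[0]][e[1]] = matrix[e[1]][e[0]] = true;
                adj.get(e[0]).add(e[1]);
                adj.get(e[1]).add(e[0]);
            }

            System.out.println("neighbours of 2: " + adj.get(2));    // [0, 1, 3]
            System.out.println("edge 1-3 exists: " + matrix[1][3]);  // false
        }
    }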
Check out MIT's OCW Intro to Algorithms course. It is a great tutorial on the theory.
For practicing data structures in Java, check Data Structures & Algorithms in Java by Robert Lafore; it is excellent.
Implementation in one language is sufficient, but try to solve problems in both a structured language like C and an OO language like Java/C++. This will help a lot when preparing for interviews.
One good resource for basic data structures in C: here

Resource for learning Algorithms for non-CS/Math degrees [closed]

I've been asked to recommend a resource (online, book, or tutorial) to learn algorithms (in the sense of the MIT Intro to Algorithms) for non-CS or Math majors. Obviously the MIT book is way too involved, and some of the lighter treatments (like O'Reilly's Algorithms in a Nutshell) still seem as if you would need some background in algorithmic analysis. Is there a resource that presents the material in a way that developers who do not have a background in theoretical computer science will find useful?
I think the best way to learn algorithms is through the various competition sites.
USACO - my personal favorite, as it gives a clear path through the material
TopCoder - already mentioned
Sphere Online Judge - great if you want to work in another language other than C/C++/Java
As far as books, the best single intro I've seen for the non-math specialist is Data Structures and Algorithms. It takes you through an algorithm line by line and shows you how it decomposes mathematically, something CLRS's otherwise excellent analysis section is a little less clear on.
Skiena's Algorithm Design Manual is also excellent, as is his Programming Challenges, which is essentially a tutorial through the Valladolid Online Judge.
Honestly, though, I think the single most helpful thing a beginner can do is to implement the various algorithms -- merge sort, say, followed by Quicksort -- and time them against variously sized inputs. Create a spreadsheet with a graph that shows their growth over time. Very few non-specialists will have the patience or the know-how to set up a recurrence relation and solve their way through it. But you must understand the effect of, say, O(n^2) growth over time, and there's no better way to learn this than to watch your own program blow through its memory stack. :)
I say this as a non-CS, non-math programmer who has spent a good couple of months wrapping my mind around algorithmic analysis.
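A minimal sketch of that timing experiment (Java, and a variation of my own: it times a quadratic insertion sort against the library's O(n log n) Arrays.sort so the difference in growth is obvious; in the answer's spirit you could substitute your own merge sort and quicksort):

    import java.util.Arrays;
    import java.util.Random;

    public class GrowthDemo {
        // Insertion sort: O(n^2) comparisons on random input.
        static void insertionSort(int[] a) {
            for (int i = 1; i < a.length; i++) {
                int key = a[i], j = i - 1;
                while (j >= 0 && a[j] > key) { a[j + 1] = a[j]; j--; }
                a[j + 1] = key;
            }
        }

        static long timeMillis(Runnable r) {
            long start = System.nanoTime();
            r.run();
            return (System.nanoTime() - start) / 1_000_000;
        }

        public static void main(String[] args) {
            Random rnd = new Random(42);
            // Double the input size each round and watch the quadratic sort fall behind.
            for (int n = 10_000; n <= 160_000; n *= 2) {
                int[] data = rnd.ints(n).toArray();
                int[] copy1 = data.clone(), copy2 = data.clone();
                long slow = timeMillis(() -> insertionSort(copy1));
                long fast = timeMillis(() -> Arrays.sort(copy2));
                System.out.printf("n=%7d  insertion=%6d ms  Arrays.sort=%4d ms%n", n, slow, fast);
            }
        }
    }

Paste the printed numbers into a spreadsheet and plot them; the quadratic curve versus the near-linear one tells the whole story.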
I'd go for the Algorithm Design Manual, by Steven Skiena. It's very readable and starts with the basics in an easy-to-understand way. For example, it explains big-O notation very well. The emphasis is on practical application, which is a big bonus for beginners coming from a non-theoretical field.
The second half of the book is a reference of common algorithm problems and practical approaches to their solutions. I found it invaluable as a learning aid, and now as a reference.
I'm not sure which MIT book you're referring to, but the canonical text is CLRS. I don't think it really assumes any background besides high school math.
Personally, I found doing TopCoder algorithm competitions over the course of the past few years to be the best way for me to learn common algorithms and put them into practice. Perhaps you should try the same. Whatever you do, I suggest that you spend a lot more hands-on-keyboard time implementing things you learn than head-in-book time, because that's the way to really internalize different techniques.

Which single software quality aspect do you always strive to achieve? [closed]

Is it performance, scalability, maintainability, usability, or something else? What is it that you always strive to achieve while creating good software or an application, and why?
I always prefer maintainability above anything. It's OK if it's not optimized or doesn't have a great user interface - it has to be maintainable. I'm sure each one of us has something very important to say here. The whole idea is to gather as many perspectives as possible for improvement in software development.
There's a false premise here: that you want to optimize only one single aspect.
You need to strike a balance, even if that means none of the aspects is perfectly optimised.
For example, your suggestion of striving for maintainability is futile if the usability suffers so much that no-one wants to use your product.
(It could even be interpreted as a little bit selfish, putting your priorities for an easier life over those of the customer.)
Similarly, when I see people striving to get the fastest possible performance out of a component when there is little customer need for it, it's frustrating, because they are often hurting maintainability or missing the opportunity to improve security.
It has to do what the customer wants it to do
It doesn't matter how fast, how efficient, how maintainable, or how testable a piece of software is; if it doesn't do what the customer wants, then it's of no use to them.
Good usability for the end user, and some elegance in the code for the fellow developers who might have to work on the same project.
Readability.
If code is readable, it's easier to understand! Things like performance optimizations can come later, if required, after profiling your code.
I think all the other 'goals' you mention can be built on top of that, provided you have a readable - and therefore understandable - codebase.
