Copying algorithms and feeling guilty [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
So this might be a weird question, and it might be better suited to a psychologist, but since they probably wouldn't know much about programming, I thought I'd ask you. I have an assignment that asks me to solve a problem using an algorithm. Usually I finish the problem and then look online for better algorithms, to see how other people did theirs. But I was working on a problem that I had solved maybe 50% of the way, and then I ran into a wall.
So I looked online and found this great algorithm. I want to use it (I have already implemented it), but I feel a bit jealous and guilty that I couldn't think of a way to solve it myself, like I have done something wrong. I will obviously cite where I got the algorithm from, so I'm not cheating. How should I view something like this: take what I learned from the algorithm and try to apply it in the future? Do you ever feel like, "Oh, I copied it and was unable to think of it on my own"? I'm a bit obsessive about this stuff. Thanks for any help and support.

Consider this: if you always had to start from scratch and reinvent everything someone else figured out before you, you'd still be sitting in a cave, hunting and gathering plants for food - you get the picture.
Someone else invented the computer, but you don't feel guilty about that, do you? There are a number of algorithms out there that are very fundamental, and you use them even when you don't realize it - searching, sorting, memory management, etc.
Copying is progress: it frees up your time to solve new problems, and someone else may end up copying your solutions if they are good.
...
However, to get good you have to grunt through the basics and really understand them. Blindly copying won't give you any clue about how to come up with your own algorithms. On top of that, copying may be illegal - if a certain algorithm is protected by a patent, for example.
My take: use your best judgement and don't be too shy to copy, but make sure you really understand what you're copying, and strive to better yourself so that eventually others find you worth copying.

Referring to algorithms from the internet is obviously not cheating; that's the reason some generous person uploaded them in the first place. You should accept that you won't always be able to think of the best algorithm, even if you have the right knowledge. Algorithms are like art: the best ones come when you are in the right state of mind. So don't worry, and enjoy programming.

Related

What are Some Ways or Common Methods to Tell if an Unsupervised Learning Algorithm is Correct [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I know that part of the beauty, mystery, and complexity of unsupervised learning is that it extracts information out of lots of data that humans can't figure out on their own. However, are there any ways of knowing whether the algorithm is right? For example, say it is looking at stock trends and it makes some deduction about a certain stock. Without actually seeing how it plays out, is there any way of knowing that it is right? The data it trained off of could be wrong or, more importantly, your algorithm could have just drawn the wrong conclusions. Obviously there are mathematical measures such as loss, but do we currently just have to live with the fact that the algorithm may be wrong? What are some of the ways we can measure how correct an unsupervised learning algorithm is (or is "correct" just an ambitious term)?
In short:
Without actually seeing how it plays out, is there any way of knowing that it is right?
No. If there was, then you wouldn't even need your original algorithm. Just make random predictions and use your oracle to tell if they're right or not.
The data it trained off of could be wrong
ML algorithms learn based on data. If it's wrong, they will learn wrong. If you were only ever told that 1+1=3, would you have any reason to question it?
more importantly, your algorithm could've just drawn the wrong conclusions
No conclusion is wrong if it is supported by the data. It might not be the one you're after (see https://www.jefftk.com/p/detecting-tanks), in which case you should get data that better describes what you're after.
but currently do we just have to live with the fact that the algorithm may be wrong?
Yes, and we probably always will. Are humans always right about anything? Under the right circumstances, you can be wrong about very basic things. And we're much smarter than current AIs.
What are some of the ways that we can measure how correct an unsupervised learning algorithm is (or is "correct" just an ambitious term)?
It's very ambitious. You could check the results manually, if you're good enough at the problem you're trying to solve. If you want to classify images into dogs and cats, that's probably simple enough for you as a human to judge. Apply the algorithm and check some of the predictions manually to get an idea of how well it did.
If you want to have something that plays Go really well, challenge the world champion.
It depends on the problem.

How to phrase a request for feedback / support from management? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I'm in my first development job out of college and have been handed a (solo) project that is completely outside the range of my skills/experience both in terms of the technologies being used and the sheer scope of the thing.
I've spent the last 6 months or so basically completely retraining myself and then starting to build the thing. Although I did very well at college and I think I'm on track for delivery, I've had zero feedback on what I've been doing, and I'm suddenly starting to feel very much out of my depth.
My direct supervisor, while a nice guy and I think a competent coder, doesn't have the best communication skills and basically told me to "read a book" when I asked him for a bit of guidance, which is not really what I was hoping for!
Am I just being unrealistic about the amount of support I can expect as a junior developer? It seems to me that ignoring the issue and ploughing ahead runs the risk of a failed project, which is to no-one's benefit. I could take my request for guidance a step higher to the head of development, but I don't want to sound like I'm saying I can't do the job, nor do I want to make my supervisor look bad.
Can anyone suggest a good approach for saying "help!" without making myself or my supervisor look bad?
This is a great question, and I think a fairly common situation. Basically, I think what you're asking for is guidance on how to communicate with your boss, and the other people in your organization.
This might be a good time to look into the scrum framework, and take from it what seems applicable to your environment.
In particular, you mention that you might be in over your head. Or, there is an (implicit) expectation that you'll need to finish this project "tomorrow," when you really don't know how long it will take.
I suggest starting with a list. Write down everything you need to do. Include non-coding activities, like "research technology X for doing Y," and give each task a basic time estimate like "1" for short, "2" for medium, "3" for long. Then put the things in an order that you think makes sense.
Then meet with your boss, once a week, for like 20 minutes, to discuss what you did, and what you're going to do next week. Out of this discussion, you'll both see what's going on, and adjust expectations (and the list) accordingly. When conflicts of expectation come up, talk it out.
Regarding the amount of support to expect as a junior developer, this really depends on your organization, and your supervisor's opinion. As software engineering is still a relatively young profession, there isn't much in the way of industry-standard mentoring programs.
I suggest trying the list + meeting thing for a couple months, and observe how your opinion of the support situation changes. Then, go to a large conference as soon as possible; spend the money if you need to. You'll see who is struggling with similar situations, and also who is not, and you'll create your own, more-informed model of "how the industry is supposed to work."
Regarding a good approach to communicating, I (seriously) suggest The Seven Principles for Making Marriage Work by John Gottman, which has a lot of examples of what works and doesn't work when communicating with people.

How do people solve programming competitions so fast? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I hope this is not a vague/broad/subjective question. If it is, please close it.
Anyway, at several programming competitions (like Google's Code Jam, Facebook's Hacker Cup and so on), by the time I've successfully understood a problem and have an inkling of how to approach it, I see that half the questions are already solved by many people.
My question is, how do these people get so good? Is it pure genius? Is it experience? Is it the ability to think really fast? How would you suggest I improve my skills? I would say I'm a competent programmer. I can eventually solve some of those questions.
Additionally, whenever I inspect the code of winners, I see a LOT of macros being used. This suggests to me that they have a template (like #define shortcuts that abbreviate for loops) which they use to program faster. Does this make a significant difference?
The thing is, you're competing against people who've spent massive amounts of time mastering their skill to compete in these competitions. You're unlikely to catch up any time soon, but...
How do these people get so good?
Have the theoretical knowledge to solve the problems and practice, practice, practice.
Is it pure genius?
It can be, but practice can to a reasonable extent make up for it.
Is it experience?
Yes.
Is it the ability to think really fast?
Not really. Practice allows you to approach the problem correctly and skip insignificant details in the problem statement.
How would you suggest I improve my skills?
Get the theoretical knowledge and practice.
Do macros make a significant difference?
It may cut 10% off your time, but probably not much more.
Statistically speaking, any programming competition with a large enough audience will attract super-talents who can churn out nice and elegant code at super-speed. It's like running the marathon. Running it in 4 hours is really good, even if the world record is around 2 hours. Don't worry about it.
Focus on code quality and elegance instead of being able to churn out code at super-speed. Practise, have fun, and don't look too much at how fast other people are working.

What's a good analogy to explain the concept of wicked problems? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
Most developers understand the concept of wicked problems. What's a good analogy to use when explaining this concept to project managers?
If you want an analogy, I would go with the timeline of NASA. It's still technical, but you do not need any coding skills to understand the difficulty involved. I'm also using the Coding Horror definition of a wicked problem, in that:
Horst Rittel and Melvin Webber defined a "wicked" problem as one that could be clearly defined only by solving it, or by solving part of it. This paradox implies, essentially, that you have to "solve" the problem once in order to clearly define it and then solve it again to create a solution that works.
When NASA started, they were tasked with getting a man on the moon. I'm sure at the time they had ideas of how they were going to accomplish the task, but there was no way they could spec out the first moon mission at the start. They had to develop rockets and find out all the catastrophic things that could go wrong. They had to get an orbiter flying around the Earth and then bring the astronaut back home safely. Eventually they got to the point of getting to the moon, but there was still the issue of getting home.
I hope this seems like a non-programming wicked problem to your project manager. If not, I agree with Glomek. You are doomed.
Ever-changing requirements lead to an impossible-to-manage design. Just send him here: Winchester Mystery House. The house is filled with staircases to nowhere and doors that open onto brick walls. It was built exactly as spec'd, but isn't really what you'd call usable.
Of course up here in New England a "wicked problem" is one that requires a wicked good engineer to come up with a wicked clever solution :)
there's the time-honored "trying to hit a moving target" analogy
which you could step up to wicked level as
trying to hit a moving target that changes shape,
wears disguises, hides in shadows, recruits minions,
and shoots back
Try getting them to read the article with "I wanted to know your thoughts on this article..."
Really, your project managers should know this stuff though.
If your project manager has no programming experience, then you are doomed and should find a new place to work.
If your project manager has no programming experience, and is not willing to leave architectural decisions to someone who has some programming experience, then you are double doomed and you very desperately need to find a new place to work.

How do you move from the Proof of Concept phase to working on a production-ready solution? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I'm working on a project that's been accepted as a proof of concept and is now on the schedule as an actual production project. I'm curious how others approach this transition.
I've heard from various sources that when a project starts as a proof of concept, it's often a good idea to trash all of the code written during that rapidly-evolving phase and essentially start over with a clean slate, relying on what you learned from the conceptual phase but without working to clean up the potentially messy code you wrote the first time around. Kind of the programming version of the "throw away the first draft of that angry email you're about to send and start all over" theory.
I've done it this way in the past, and I've also refactored the conceptual code to use in production, but since I'm in the transition phase for a new project, I wanted to get an idea of how others do this. Obviously a lot depends on the project itself and on the conceptual code (if what you generated works but won't scale, for example, it's probably best to start afresh; but if you have a very compressed timeline for the project, you might be forced to build on what you've already written).
That said, if all things were equal what would you all choose as an approach?
As you already kind of hinted at, the answer is, "It Depends"
Starting over is good because you can help trim out the stuff that was added while you were initially working out the kinks but isn't really needed.
It also gives you a chance to give more consideration to how you want the architecture to be -- without already being dependent on how the proof-of-concept was written...
In practice, though, unless you're in the business of selling the software to the outside world, building upon the prototype is pretty commonplace. Just don't get into the habit of thinking "I'll fix it later" if you run into some code that smells or seems like it could be done in a better way...
Refactor the existing code into the solution.
For me it would depend on how sloppy my POC was. If it is something I would be ashamed to pass onto another developer, I would rewrite it. Otherwise, just go with what you got.
If the code works, use it. Spend a little bit of time refactoring the messiest parts in order to ease future maintenance. But don't fall into the trap of building a new system from scratch.
Throw away everything from the proof of concept except for the lessons learned, and, possibly, some minor code fragments such as calculations etc.
Proof of concept applications should never be more than just the bare minimum to see if the technology in question will work and to start testing some of the boundary conditions.
Once done you are free to redesign the application with your new found knowledge.
