QWERTY vs Dvorak: Controlled studies? [closed]

Has anybody done a properly controlled study on typist speed between QWERTY and Dvorak keyboard layouts? I am curious about whether people actually achieve significant speed improvements, but I've yet to read anything that is non-anecdotal. A typical conversation:
Dvorak: "I switched over two years ago and never looked back! My colleagues hate me! I think I might type faster now."
QWERTY: "My colleague switch over two years ago. I sometimes have to use his computer, and I get very annoyed when I have to switch it back to QWERTY. I still type faster than him."
Dvorak: "QWERTY was designed to slow down typists to prevent jams in obsolete typewriters! We don't use typewriters, so why is our keyboard layout catered to them?"
QWERTY: "The keyboard layout is a well-established convention, much akin to the use of English units of measure in the US. Sure, a transition is possible, but is it possible to recoup the retraining costs in a realistic time frame?"
... and from there, a pointless dialectic ensues in which QWERTY and Dvorak neglect to perform any valid assessments of such minor empirical details as who can achieve better typing speed or has a reduced risk of RSI.
What I'd like to see are the results of words-per-minute tests - for instance, those that force the typist to retype random bits of literature - split among users that rate themselves as light, moderate, or heavy typists, with information on when the QWERTY-Dvorak transition was performed to try to understand the magnitude of QWERTY's historical advantage. Ideally, the study could include pre-transition QWERTY speeds for comparison with Dvorak speeds, and perhaps measurements of Dvorak speeds for various periods of time post-transition.
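For concreteness, the scoring side of such a test is trivial; the sketch below assumes the usual convention of five characters per "word" and a penalty of one word per uncorrected error (the function names are purely illustrative):

```python
# Minimal sketch of scoring a transcription typing test.
# Assumes the common convention that one "word" = 5 characters;
# net WPM subtracts one word per uncorrected error. Illustrative only.

def gross_wpm(chars_typed: int, minutes: float) -> float:
    """Gross words per minute: raw output, errors ignored."""
    return (chars_typed / 5) / minutes

def net_wpm(chars_typed: int, uncorrected_errors: int, minutes: float) -> float:
    """Net words per minute: penalize one word per uncorrected error."""
    return max(0.0, gross_wpm(chars_typed, minutes) - uncorrected_errors / minutes)

if __name__ == "__main__":
    # e.g. 1,250 characters typed in 3 minutes with 4 uncorrected errors
    print(f"gross: {gross_wpm(1250, 3):.1f} wpm, net: {net_wpm(1250, 4, 3):.1f} wpm")
```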
I should not need to say it, but just in case: It should be obvious that a questionnaire on a topic with as much fervent zealotry behind it as the QWERTY vs Dvorak discussion cannot control for selection bias. The study must perform actual, controlled tests, and is ideally longitudinal.
Anyone know of any such studies?

Related

What is the value of performance in an application, in dollar form? [closed]

This isn't just a question for me, but for anyone wondering how important it is to make things more Daft Punk-like (faster, better, stronger ;) ). I don't know enough about this topic, even though I try my best to make things work as well as they can. But I am curious what a company like Google might pay for a better sorting algorithm, or a better search algorithm.
My main reason for asking is that I have developed, and am still working on, a method for a search algorithm that works in O(8) every time. That's right, not O(8n), but O(8). I'm not sure about the memory, but it is extremely quick. Quick enough to be instant. It would remove the need to split your data across servers.
Anyway, back to the question: how much money (so people can ACTUALLY understand the value) do or would companies spend on things like speed, memory usage, and file size?
Distributed Computing Economics, written by Jim Gray in 2003, is a good start. See this quote:
From this we conclude that one dollar equates to:
≈ 1 GB sent over the WAN
≈ 10 Tops (tera CPU operations)
≈ 8 hours of CPU time
≈ 1 GB of disk space
≈ 10 M database accesses
≈ 10 TB of disk bandwidth
≈ 10 TB of LAN bandwidth
Obviously the values have changed since then, but this is very concrete data pointing to the fact that performance counts.
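As a rough illustration of how you might turn that table into a number, here's a small sketch that prices a workload using Gray's 2003 equivalences quoted above; the conversion rates are his (and long outdated), and the example workload figures are made up.

```python
# Rough cost sketch using Jim Gray's 2003 equivalences quoted above.
# All rates are ~$1 per unit listed; the example workload is hypothetical.

COST_PER_UNIT_USD = {
    "wan_gb": 1 / 1,            # $1 per GB sent over the WAN
    "tera_ops": 1 / 10,         # $1 per 10 tera CPU operations
    "cpu_hours": 1 / 8,         # $1 per 8 hours of CPU time
    "disk_gb": 1 / 1,           # $1 per GB of disk space
    "db_accesses": 1 / 10e6,    # $1 per 10 M database accesses
    "disk_bw_tb": 1 / 10,       # $1 per 10 TB of disk bandwidth
    "lan_bw_tb": 1 / 10,        # $1 per 10 TB of LAN bandwidth
}

def workload_cost(usage: dict) -> float:
    """Dollar cost of a workload under Gray's 2003 rules of thumb."""
    return sum(COST_PER_UNIT_USD[k] * v for k, v in usage.items())

# Hypothetical job: 50 GB over the WAN, 200 CPU hours, 5 M database accesses.
print(f"${workload_cost({'wan_gb': 50, 'cpu_hours': 200, 'db_accesses': 5e6}):.2f}")
```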
A more recent post related to this subject on High Scalability blog: At Scale Even Little Wins Pay Off Big - Google And Facebook Examples.
There are some very telling tidbits about how real users respond to slow operations buried in this hard-to-read blog post: http://blogs.adobe.com/jd/2008/08/factors_affecting_realworld_ad.html
I found the Macromedia study he mentions a few years ago, but sadly I can't find it now. It described a shockingly high falloff in adoption as download times increased.
Takeaway: Users are incredibly impatient. Be fast, or lose.

When an optimization is no longer "Micro-optimization" [closed]

I'm a team lead/feature architect who came up from a developer position, so I have coding experience, and a lot of the things being evolved nowadays were implemented by me in the first place. Now to the point: reviewing some code for the sake of refactoring (and some nostalgia), I found a bunch of places that could be optimized, so as an exercise I gave myself two days to explore and improve a lot of stuff. After running a benchmark, I found that the module's overall performance had improved by about 5%.
So I approached some colleagues (and the team I run) and presented my changes. I was surprised by the general impression of "micro-optimization". If you look at each single optimization in isolation, then yes, they are micro, but if you look at the big picture...
So my question here is: When is an optimization no longer considered "micro"?
Whether an optimization is micro or not is usually not important. The important thing is whether it gives you any bang for the buck.
You wrote that you spent two whole working days for a 5% performance increase. Did you spend those days wisely? Were the things you fixed the slowest parts of your application, or at least the performance issues that were easiest to fix? Did your changes let you reach a performance target that you weren't meeting before? Does 5% matter at all in your case? Usually you want something like a 100% or 1000% increase if you figure out that you actually need to improve performance.
Could you perform your optimizations without hurting the readability and/or maintainability of the code?
Besides, what other costs did those optimizations incur? How much regression testing were you required to perform? How many new bugs did you create?
I know this looks more like questions than an answer, but these are the kinds of questions that should drive your decision to make an optimization or not.
Personally, I would differentiate between changes that reduce algorithmic time or space complexity (from O(N^2) to O(N), for example) and changes that speed up the code or reduce its memory requirements while keeping the overall complexity the same. I'd call the latter micro-optimizations.
However, keep in mind that while this is a precise definition, it should not be the only criterion for deciding whether a change is worth keeping: reduced code complexity (as in difficulty to understand) is often more important, especially if speed and memory requirements are not a major concern.
Ultimately, the answer will depend on your project: for software running on embedded devices the rules are different than for something running on a Hadoop cluster.
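To make that distinction concrete, here's a minimal sketch (the examples are hypothetical, not from the question's code base): the first change reduces complexity from O(N^2) to O(N); the second keeps the same complexity and only shaves the constant factor, which is what I'd call a micro-optimization.

```python
# Hypothetical examples of the distinction above.

# Algorithmic optimization: membership testing against a list is O(N) per
# lookup, so this loop is O(N^2) overall...
def common_items_quadratic(xs, ys):
    return [x for x in xs if x in ys]          # ys is a list: O(N) per "in"

# ...converting ys to a set makes each lookup O(1), so the loop becomes O(N).
def common_items_linear(xs, ys):
    ys_set = set(ys)
    return [x for x in xs if x in ys_set]      # set lookup: O(1) per "in"

# Micro-optimization: same O(N) complexity, just a smaller constant factor
# (hoisting the attribute lookup out of the loop).
def squares_micro_optimized(xs):
    out = []
    append = out.append                        # avoid repeated attribute lookup
    for x in xs:
        append(x * x)
    return out
```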

I've been programming for six years and my homework at university gets marked down for coding style [closed]

I've been programming for the last 6 years. I just recently started my first degree in computer science. My work seems to be constantly marked down for different reasons, among them:
Uncommented code
Writing too long identifier names and methods
Writing too many methods
After working as a programmer for six years at numerous startup companies, and absorbing best practices that include the requirement to write "self-explanatory code", I find it very difficult to go back to bad practices.
What can I do?
Self-documenting code is not a replacement for comments.
I've argued with many senior devs over this point. Code can go a long way toward communicating intent, but there are some things that simply cannot (and should not) be documented through code alone.
For example, if you have a highly optimised function, method, or chunk of code that is heavily coupled to the underlying problem domain and requires very specific knowledge of the business or solution, comments are needed.
Yes, yes, comments come with their fair share of problems, but that doesn't mean they aren't helpful (or mandatory in certain cases).
I can't tell you how many times I've read a colleague's line of code and thought "what the hell?!?", only for them to explain that they needed to do it that way due to some quirk of a library or browser we were targeting.
Comments are a mechanism for a developer to justify a design decision.
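A contrived sketch of what that looks like in practice (the library quirk and the fetch_report function here are entirely hypothetical):

```python
# Hypothetical example: the comment carries the "why" that the code cannot.

def fetch_report(get, report_id):
    """`get` is any callable that takes a URL path and returns a string body."""
    # Workaround: the (hypothetical) reporting service we target intermittently
    # returns an empty body on the first request after authentication, so we
    # retry once before treating an empty response as an error. Without this
    # comment the retry looks like pointless duplication, and sooner or later
    # someone will "clean it up".
    body = get(f"/reports/{report_id}")
    if not body:
        body = get(f"/reports/{report_id}")
    if not body:
        raise RuntimeError(f"report {report_id} came back empty twice")
    return body
```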
As for your other problems, they are subjective. How long is too long? How many is too many?
Point them at Microsoft's guidelines if you are on the MS stack; otherwise there are countless style articles for whichever language you're using...
Hope that helps.

Is this defensive programming? [closed]

I've always thought defensive programming was evil (and I still do), because in my experience it has typically involved some sort of unreasonable sacrifice based on unpredictable outcomes. For example, I've seen a lot of people try to code defensively against their own co-workers. They'll do things "just in case" the code changes in some way later on. They end up sacrificing performance in some way, or they'll resort to some silver bullet for all circumstances.
This specific coding practice, does it count as defensive programming? If not, what would this practice be called?
Wikipedia defines defensive programming as a guard for unpredictable usage of the software, but does not indicate defensive programming strategies for code integrity against other programmers, so I'm not sure if it applies, nor what this is called.
Basically I want to be able to argue with the people that do this and tell them what they are doing is wrong, in a professional way. I want to be able to objectively argue against this because it does more harm than good.
"Overengineering" is wrong.
"Defensive Programming" is Good.
It takes wisdom, experience ... and maybe a standing policy of frequent code reviews ... to tell the difference.
It all depends on the specifics. If you're developing software for other programmers to reuse, it makes sense to do at least a little defensive programming. For instance, you can document requirements about input all you want, but sometimes you need to test that the input actually conforms to those requirements to avoid disastrous behavior (e.g., destroying a database). This usually involves a (trivial) performance hit.
On the other hand, defensiveness can be way overdone. Perhaps that is what's informing your view. A specific example or two would help distinguish what's going on.
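For instance, the kind of "little defensive programming" mentioned above might be nothing more than a cheap input check at a reusable boundary. A minimal sketch, with an invented function and invented rules:

```python
# Minimal sketch of "a little defensive programming" at a reusable boundary.
# The function and its rules are hypothetical; the point is the cheap check
# before anything destructive happens.

def purge_old_records(records: list[dict], max_age_days: int) -> list[dict]:
    """Return only records newer than max_age_days; raise instead of guessing."""
    if max_age_days <= 0:
        # A caller passing 0 or a negative value almost certainly has a bug;
        # silently purging everything would be the "destroy the database" case.
        raise ValueError(f"max_age_days must be positive, got {max_age_days}")
    return [r for r in records if r["age_days"] <= max_age_days]

# The check costs one comparison; the failure it prevents is unrecoverable.
print(purge_old_records([{"age_days": 3}, {"age_days": 400}], max_age_days=30))
```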

Managing code transitions between developers [closed]

What are your best practices for making sure newly hired developers quickly get up to speed with the code, and for ensuring that developers who move on don't set back ongoing releases?
Some ideas to get started:
Documentation
Use well established frameworks
Training / encourage mentoring
Notice period in contract
From a management perspective, the best (but seemingly seldom-followed) practice is to allow time in the schedule for training, both for the new employee and for the current developer who'll need to train them. There's no free lunch there.
From a people perspective, the best way I've seen for on-boarding new employees is to have them pair program with current developers. This is a good way to introduce them to the team's coding standards and practices while giving them a tour of the code.
If your team is pairing-averse, it really helps to have a few up-to-date diagrams of how key parts of the system are structured and how key bits interact. It's been my experience that for programs of moderate complexity (around 0.5M lines of code), the key points can be gotten across with a few documents: perhaps some entity-relationship diagram fragments and a few sequence diagrams that capture the high-level interactions.
From the code perspective, here's where letting cruft accumulate in the code base comes back to bite you. The best practice is to refactor aggressively as you develop, and follow enough of a coding guideline that the code looks consistent. As a new developer on a team, walking into a code base that resembles a swamp can be rather demoralizing.
Use of a common framework can help if there's a critical mass of developers who'll have had prior experience. If you're in the Java camp, Hibernate and Spring seem to be safe choices from that perspective.
If I had to pick one, I'd go with diagrams that give enough of a rough map of the territory that a new developer can find out where they are and how the bit of code they're looking at fits into the bigger picture.
