Want to Write a Program that Will Switch Networks (Wireless) - macOS

I have two units on my property; one is a residence, the other is now an office. I have an AirPort Extreme in each place and they are bridged. Sadly, the basic software in the devices (iPads, phone, laptop) does a truly awful job of deciding when to change networks. What really sucks about this is that I sometimes make calls, and the call will drop even though the other network is right there with a great signal.
I posted on the Apple Discussion Forums, and someone gave me a good answer, but it sounds like that approach is still not going to produce a good result.
I want to write a program that does this:
The minute it sees a drop in the signal, it looks for the other network.
Then it starts monitoring the signal ratio between the two.
As soon as the ratio favors one over the other by more than a marginal amount, it switches.
I hope this is not seen as an admin question; it is a programming question. Does anyone have experience with the frameworks in Apple's developer tools that would support this?
Thanks.
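For reference, here is a rough sketch of that loop using Apple's CoreWLAN framework through the PyObjC bridge (pyobjc-framework-CoreWLAN). The SSID names, password handling, switching margin, and polling interval are placeholders, and recent macOS versions may also require location/privacy permissions before scan results are visible, so treat this as a starting point rather than a finished tool:

```python
# Rough sketch: poll the current RSSI and hop to the stronger of two bridged networks.
# Assumes the PyObjC CoreWLAN bindings are installed (pip install pyobjc-framework-CoreWLAN).
# The SSIDs, password, margin and poll interval below are placeholders.
import time
import CoreWLAN

HOUSE_SSID = "House"        # hypothetical SSID of the residence base station
OFFICE_SSID = "Office"      # hypothetical SSID of the office base station
PASSWORD = "wifi-password"  # placeholder; a real tool would read this from the keychain
MARGIN_DB = 8               # only switch if the other network is this much stronger
POLL_SECONDS = 5

def strongest(interface, ssid):
    """Scan for the given SSID and return the strongest matching CWNetwork, or None."""
    networks, _error = interface.scanForNetworksWithName_error_(ssid, None)
    return max(networks, key=lambda n: n.rssiValue()) if networks else None

def main():
    interface = CoreWLAN.CWWiFiClient.sharedWiFiClient().interface()
    while True:
        current_ssid = interface.ssid()
        current_rssi = interface.rssiValue() if current_ssid else -100  # treat "no network" as very weak
        other_ssid = OFFICE_SSID if current_ssid == HOUSE_SSID else HOUSE_SSID
        candidate = strongest(interface, other_ssid)
        if candidate is not None and candidate.rssiValue() > current_rssi + MARGIN_DB:
            ok, error = interface.associateToNetwork_password_error_(candidate, PASSWORD, None)
            if not ok:
                print("switch failed:", error)
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    main()
```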

Related

When software problems reported are not really software problems

Apologies if this has already been covered or you think it really belongs on wiki.
I am a software developer at a company that manufactures microarray printing machines for the biosciences industry. I am primarily involved in interfacing with various bits of hardware (pneumatics, hydraulics, stepper motors, sensors etc) via GUI development in C++ to aspirate and print samples onto microarray slides.
On joining the company I noticed that whenever there was a hardware-related problem, the whole setup would freeze, with nobody any the wiser as to what the specific problem was - hardware, software, misuse, etc. Since then I have improved things somewhat by introducing software timeouts and exception handling to better identify and deal with the hardware-related problems that arise, e.g. PLC commands not completing successfully, inappropriate FPGA response commands, and various other deadlock-type conditions. In addition, the software will now log a summary of the specific problem, inform the user and exit the thread gracefully. This software is not embedded; it just interfaces over serial ports.
In spite of what has been achieved, the non-software guys still do not fully appreciate that in these cases the 'software' problem they are reporting to me is not really a software problem: the software is reporting a problem, not causing it. Don't get me wrong, there is nothing I enjoy more than coming down on software bugs like a ton of bricks and looking for ways to improve robustness. I know the system well enough now that I almost have a sixth sense for these things.
No matter how many times I try to explain this, nothing really penetrates. They still report what are essentially hardware problems (which eventually get fixed) as software ones.
I would like to hear from any others that have endured similar finger-pointing experiences and what methods they used to deal with them.
UPDATE
Some great responses here that pretty much sing from the same hymn sheet: be more descriptive. I guess identifying the command and bombing out cleanly when the hardware fails was the first stage, but was still not quite enough. The next stage will be to map what are to the layman fairly meaningless PLC commands to something more suggestive. "PLC Command M71 timeout" becomes "Failure to initialize syringe system. Check adequate vacuum reached" and so on...
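Sketched in Python for brevity, that mapping can start as a simple lookup table from terse command codes to operator-level text; the codes and wording below are illustrative, not the real PLC vocabulary:

```python
# Illustrative mapping from terse PLC/FPGA status codes to operator-level text.
# The codes and wording are made up; a real table would come from the hardware spec.
FAULT_MESSAGES = {
    "M71_TIMEOUT": "Failure to initialize syringe system. Check adequate vacuum reached.",
    "M14_NO_ACK":  "Stepper motor not responding. Check motor power and cabling.",
}

def describe_fault(code: str) -> str:
    """Return operator-facing text for a fault code, falling back to the raw code."""
    return FAULT_MESSAGES.get(code, f"Hardware fault ({code}). See log for details.")

print(describe_fault("M71_TIMEOUT"))  # -> the syringe/vacuum message above
```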
Perhaps when reporting the problem either as a message to the user or an entry in the log file you need to make it explicitly clear that it's the hardware that's at fault:
"Stepper motor not responding".
Unfortunately, because it's the software that people see and interact with they assume that the software is all that there is.
You could try labeling the error messages as "HARDWARE PROBLEM". Might get your point across.
There's no such thing as a non-software problem in a system. Software is the boss, and the boss cannot blame failure on the tools.
If the underlying hardware is malfunctioning, the software should report to the user exactly what went wrong and with which component. If it doesn't, that is a software problem.
For example, a TCP disconnection means the software has to reconnect. If it's a bad FPGA response, the software should tell the user exactly what the inputs and outputs were, and which component is at fault. If it doesn't, this is a software problem.
"If what you're doing isn't working, stop doing it and try something else"
As pointed out in other comments, it's a communication problem and, to a lesser extent, a perception problem. People will blame what they don't understand FAR more easily to make themselves feel like a victim. A motor could be sparking, throwing flames and exploding because someone grossly overloaded a feeder (with EVERY warning not to plastered all over it) -- but if the software stops responding, guess what gets blamed?
Since giving every one of your users an EE and CS class or ten is completely out of the question, fall back on good ole communication. The basis of that is four things (mostly my opinion), in no particular order: what you observe, what you feel, what you think, and what should be done. With that idea in mind, I'll put it into practice in this response.
It seems like your users like to blame the software when the underlying hardware is the key issue (observe). Trying to explain this to the users is impractical and a waste of time; it's not their job and most of them won't care (feel). What you may want to try is talking with the engineering team about the parts they're using and looking into components that work better with software in general. Maybe there are constraints on the inputs that were never considered? (think) Changing out the hardware, or just understanding it better, might be the real answer, along with more targeted errors and feedback to those users (done).
I agree with the other posters, but I wanted to add another perspective: It could be worse. They could be attempting to solve the hardware problems for days or weeks, and then find out later, when everyone is under the gun and has been going crazy about it not getting fixed, that they were addressing the wrong problem and it was, in fact, a software problem. So count your blessings. If they always classify it as a software problem, at least you know about it. Only then can you troubleshoot, maybe put in additional problem-solving or problem-identifying code, and make the system a tiny bit better.
Also, this is pretty much the same as every software developer everywhere has ever faced. Except usually it is the software versus the user, not the software versus the hardware. And in that case, it appears there is no known solution. Lots of ways to address the problem, but no way to fix it. Thus the ever-growing list of acronyms describing how to blame the user without being rude: ID-ten-T error, PICNIC, PEBKAC, etc.
Who is it who's reporting the problems?
If it's the end users, I think this is a non-issue. They just know that what they're trying to do is not working. It's not the user's responsibility to diagnose the problem. All they know is, "I tried to do X, Y should have happened, but instead Z happened." Everything beyond that is your problem.
If the hardware folks are insisting that the problem is in the software and the software folks are insisting that the problem is in the hardware, then you need to enhance the software to diagnose errors more precisely, as ChrisF and others have noted.
If the higher-ups are blaming the software group for problems that are the responsibility of the hardware group and you're sick of taking the blame for other people's mistakes, okay, I understand that. Again, as the software guy, you have the power to create more precise error messages. If you can explicitly say, "Stepper motor not responding" or whatever, then you have the "moral authority" to insist that someone run diagnostics on the stepper motor. Just saying, "I'm pretty sure it's a hardware problem" isn't going to win an argument.
Test-oriented development (not necessarily 'test-driven') is what you should resort to.
Basically, every subsystem should have a reasonably thorough set of tests to identify problems before integration. Every time a problem occurs, test the hardware so you can know for sure (or almost sure) that it is a hardware problem. This means the hardware must be designed in such a way that it can be thoroughly tested.
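A rough sketch of what such a per-subsystem self-test could look like in Python over a serial link; send_command(), the command strings, and the expected replies are hypothetical stand-ins for whatever protocol the real PLC or FPGA speaks:

```python
# Sketch of a per-subsystem hardware self-test to run whenever a fault is reported.
# send_command(), the command strings and the expected replies are hypothetical
# stand-ins for whatever serial protocol the real PLC/FPGA speaks.
import serial  # pyserial

def send_command(port: serial.Serial, command: str, timeout_s: float = 2.0) -> str:
    port.timeout = timeout_s
    port.write((command + "\r\n").encode())
    return port.readline().decode(errors="replace").strip()  # empty string on timeout

def self_test(port: serial.Serial) -> dict:
    """Exercise each subsystem and report pass/fail per component."""
    checks = {
        "syringe_vacuum": ("TEST_VACUUM", "OK"),
        "stepper_x_axis": ("TEST_STEPPER_X", "OK"),
        "slide_sensor":   ("TEST_SENSOR_SLIDE", "OK"),
    }
    results = {}
    for name, (command, expected) in checks.items():
        try:
            results[name] = send_command(port, command) == expected
        except serial.SerialException:
            results[name] = False
    return results

# Example (hypothetical device path): results = self_test(serial.Serial("/dev/ttyUSB0", 115200))
```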
I was the integration head for my college robot team and this tactic helped a lot.
Hope this helps.
First, make sure your users are more likely to read and understand your error messages. Displaying "FPGA command GS_WIDGIT_FROB returned invalid response 0xFF45001C. Shutting down controller id 576D. (Error 1Xf)" might be great for you. But the user is likely to hit "Ok" without reading it, and even if they do read it, it gives them no useful information. Either way, you're getting a phone call. Display "Widgit Frobber requires maintenance", but still log all the heavy details somewhere, and you're likely to get fewer calls.
Second, you know it's a hardware problem so do something about it! Have your software email hardware support, or whatever it takes to get the problem fixed. If the user is forced to decide what action to take to fix it, you can bet they'll get it wrong at least some of the time. If the user sees "Widgit Frobber requires maintenance. Hardware support has been notified (ticket #234)" they know that they don't have to do a thing.
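Wired together, that idea might look something like the sketch below; the addresses, SMTP host, and the show_dialog() stand-in are all placeholders, not part of the original answer:

```python
# Sketch: show the operator a plain-language message, log the full detail,
# and notify hardware support automatically. Addresses and hostnames are placeholders.
import logging
import smtplib
from email.message import EmailMessage

log = logging.getLogger("instrument")

def show_dialog(text: str) -> None:
    print(text)  # stand-in for whatever dialog the real GUI displays

def report_hardware_fault(friendly_text: str, raw_detail: str) -> None:
    # Engineers get the full detail in the log file.
    log.error("Hardware fault: %s | detail: %s", friendly_text, raw_detail)

    # The operator sees only the plain-language summary.
    show_dialog(f"{friendly_text} Hardware support has been notified.")

    # Hardware support gets the raw detail without the user having to decide anything.
    msg = EmailMessage()
    msg["From"] = "instrument@example.com"
    msg["To"] = "hardware-support@example.com"
    msg["Subject"] = f"Hardware fault: {friendly_text}"
    msg.set_content(raw_detail)
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)
```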

What's the most irrational user behaviour you have witnessed? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
While novice software designers expect their users to behave rationally, that is far from being the case; I've often seen user perception totally disconnected from reality, or user feedback that is obviously irrational.
I think we are the ones who should adapt, not the other way around.
There's only one way that I know of to achieve this: listen to users, especially about what they don't like in the software they use.
If there's one thing I've learned so far, it's that they often complain about things one wouldn't expect.
What unexpected things did you learn from your users?
A few years ago, hospitals (at least French hospitals) were run on old Windows 3.11 software. Every single task was tedious; moving someone from one room to another would take an expert user 5 minutes.
A friend of mine was working on selling up-to-date software to those people. The same simple task would take a total beginner 30 seconds.
While most of the users were very happy with the new software, a handful were complaining, which wasn’t a surprise (there’s always a handful of users complaining). What was more unexpected was their reason: the software was damn slow. “The same simple task was instantaneous, now it takes ages to achieve. Give me my old software back”, they would say.
My friend decided to meet them, and asked them for a live demo of the slowness they were complaining about.
“Look,” said the user, “with my old software: I input the first name, enter, the name, enter, the admission number, enter, the old room number, […insert 5 minutes here…] the new room number, enter… and it’s done. See? Everything is instantaneous.”
“Now, look at your software. I do a drag and drop, as you taught me. And I wait, I wait… look, it’s done… I’ve waited for almost 30 seconds…”
That’s a real-world example. It really happened. I’m pretty sure that if the software had been modified to ask for useless information during those 30 seconds (information it would simply have discarded afterwards), this user would have felt far better about the new software.
If you think about it there's no such thing as irrational user behaviour, there's just a mismatch between your expectations and theirs. The only way to close that is through dialogue. That doesn't necessarily mean going and doing usability studies, often the right dialogue is for them to read the help where the discrepancy is easily dealt with.
The only wrong thing to do is to not listen to what they are saying - or to listen and not really hear them (see the post on here about IE on the Mac - it's the height of arrogance). Of course you are going to get some people who just don't like change and will whinge about anything, but in general if a user will take the time to point out something in your software which bugs them, then you should listen. You may choose to ignore them, but if you listen right you may just as easily uncover a real gem.
I don't believe your users or customers will often innovate for you, but I strongly believe that they are the key to your software being usable, and usability leads directly to success. So to characterise them as irrational probably doesn't serve your best purposes - or theirs. Better to take them seriously to start with and filter out what you consider not to be good feedback.
Developing for a hand held unit many years back, I got contacted by a user who complained that their unit kept on turning off immediately after power on. It turned out to be a bug; the startup message ended with the line "Press any key to continue". It should have said "Press any key, except the big red key marked power, to continue".
One thing I have learned over the years is that time spent with end-users on requirements analysis, prior to going anywhere near design, is hugely important, as is understanding the culture and educational background of the users. Designing computer systems that look and work like existing manual systems is a good start, as is understanding the workflow. Another hand-held van-sales delivery system I was involved with was specced to require on-screen customer signatures on delivery, and a signature was necessary to complete the transaction. It turned out that most of the deliveries actually occurred early in the morning, before anyone was there to sign for them, so the perceived workflow didn't gel with reality at all. The client's IT staff didn't actually know this, nor did the business analyst. If you design systems without input from actual end users, you do so at your peril.
In my previous job, I was designing a huge trading software for a huge bank.
The software would typically take around 5 minutes to launch.
Of course, the users were complaining a lot about the startup time, especially when the software was crashing during the day, which was happening from time to time.
From the day we added a detailed progress bar (progressing quite regularly, with an indicator of the number of remaining items), the complaints almost stopped.
Typical users would say, "It used to take ages to load, but now it's quite fast."
The next step for us was to display the user interface before the data was loaded instead of after (loading first makes more sense from an IT point of view).
This time, the modification resulted in a slight performance drop (from 5 minutes to 5 minutes 30 seconds), due to the cost of updating the UI during loading.
From a user perspective, the software was much faster this way!
I was once working on a CMS for images. The admin would basically browse through pages of user-made images and check the ones he wanted to publish. I wrote a nice manual on how the system works, but since everybody knows people don't read manuals, I put some guides on the page telling them what to do (in this case, something like: "Check the box for every image you want to publish").
It wasn't long before some guy came to pull my sleeve: "There's a bug in your program. It actually tosses the images I don't select, and not the ones I select."
The problem was solved by asking him to read aloud the text on the page.
"While novice software designers expect their users to behave rationally, it's far from being the case; I've seen many times the user perception being totally disconnected from reality, or it's feedback obviously irrational. I think we are the one who should adapt, not the other way around."
Are you saying we should adapt to irrational behaviour? Software development is already irrational enough (dynamic languages, test driven development, ...), and you expect us to unilaterally bend over backwards to accommodate some distorted expectations?
A few years ago, I designed a small application which was mainly aimed at helping users to input complex data in a database.
Their old method was to input everything into an Excel sheet (without validation of any kind), and then to run a VBA macro.
My new program added validation, and was able to auto-fill almost half of the data they previously manually entered.
I expected it to be a success... which it wasn't... at all :)
"It's just impossible to use", they said...
I had tested it, asked my mother to test it... my software was fine...
In fact, those users were so used to inputting repetitive data that they used only the keyboard, not the mouse. And of course, I hadn't thought of managing the tab order correctly, so the cursor was jumping all over the place every time they hit Tab - hence the "impossible to use" comment!

Meta-composition during music performances [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 4 years ago.
A couple of weeks ago, my piano teacher and I were bouncing ideas off of each other concerning meta-composing music software. The idea was this:
There is a system taking MIDI input from a bunch of instruments, and it pushes output to the speakers and lights. The software running on this system analyzes the MIDI data it's getting and determines which sounds to use, based on triggers set up by the composer (when I play an F7 chord 3 times within 2 seconds, switch from the harpsichord sound to the piano sound), pedals, or actual real-time analysis of the music. It would control the lights based on the performance and sounds of the instruments in a similar fashion - the musician would only have to vaguely specify what they wanted, and real-time analysis of their playing would do the rest. On-the-fly procedurally generated music could play along with the musician as well. Essentially, the software would play along with the performer, with one guiding the other. I imagine that it would take some practice to get used to such a system, but that it could have quite incredible results.
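As one illustration of the trigger idea (the F7-chord-three-times rule), here is a small Python sketch against the mido library; the default input port, the 50 ms chord window, and the switch_sound() hook are assumptions for the sake of the example:

```python
# Sketch of the "F7 chord three times in two seconds -> switch patch" trigger.
# Uses the mido library (pip install mido python-rtmidi); the port choice,
# timing windows, and switch_sound() hook are placeholders.
import time
import mido

F7_PITCH_CLASSES = {5, 9, 0, 3}   # F, A, C, Eb
WINDOW_S = 2.0                    # three chords within this window fire the trigger
CHORD_GAP_S = 0.05                # notes within 50 ms count as one chord

def switch_sound(name: str) -> None:
    print(f"switching to {name}")  # stand-in for the real patch change

def main():
    chord_times = []   # times at which complete F7 chords were detected
    sounding = {}      # pitch class -> time of its most recent note_on
    with mido.open_input() as port:  # default MIDI input port
        for msg in port:
            if msg.type != "note_on" or msg.velocity == 0:
                continue
            now = time.monotonic()
            sounding[msg.note % 12] = now
            recent = {pc for pc, t in sounding.items() if now - t <= CHORD_GAP_S}
            if F7_PITCH_CLASSES <= recent:
                chord_times = [t for t in chord_times if now - t <= WINDOW_S] + [now]
                sounding.clear()
                if len(chord_times) >= 3:
                    switch_sound("piano")
                    chord_times.clear()

if __name__ == "__main__":
    main()
```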
I'm a big fan of improv jazz. One characteristic of improv that is lacking from other art forms is its temporal nature. A painting can be appreciated 10 or 1000 years after it has been painted, but music (especially extemporized music) is as much about the performance as it is about the creation. I think that the software I described would add a great deal to the performance, since with it, playing the exact same piece would result in a completely different show each time.
So, now for the questions.
Am I crazy?
Does software to do any or all of this exist yet? I've done some research and haven't turned up anything. The key to this system is that it is running during the performance.
Were I to write something like this, would a scripting language such as Python be fast enough to do the computations that I need? Presumably it'd be running on a fairly quick system, and could take advantage of the 2^n core processors Intel keeps releasing.
Can any of you share your experience and advice concerning interfacing with musical instruments and lights and the like?
Have any ideas or suggestions? Cold and harsh criticism?
Thanks for your time in reading this, and for any and all advice!
(And sorry for the joke in the tags, I couldn't resist.)
People have used Max/MSP to do this kind of thing with MIDI, creating video accompaniment or just MIDI accompaniment. It's a completely domain-specific app, probably inspired by Smalltalk or something, which barely any real programmer could love, but musician-programmers do.
Despite the text on the site I just linked to, and the fact that 'everyone' uses the commercial version, it wasn't always a commercial product. IRCAM eventually released its own lineage, called jMax. PureData, mentioned in another post here, is another rewrite of that lineage.
There's also CSound, which wasn't meant to be real-time, but can likely run close to real time now that computers are far more capable than when CSound started.
Some people have also hacked Macromedia Director extensions to allow for doing midi stuff in Lingo... That's very outdated, and hence some of them have moved to more modern Adobe environments.
Look at PureData. It can do extensive midi analysis and folks use it for performance.
Indeed, here's a video that flashes past a puredata screen. It shows someone interacting with a rather complex instrument using PD.
Also, look at CSounds.
I have used PyAudio quite extensively for dealing with raw audio inputs, and found it to be very unpythonic, acting much more like a very thin wrapper over C code. However, if you're dealing with MIDI rather than raw waveforms, your tasks are quite a bit simpler, and Python should be quite fast enough, unless you play at 10000 beats per minute :)
Some of the issues: detecting simultaneity, harmonic (formal - i.e., chord structure) analysis.
This is also an 80/20 problem that if you restrict the chord progressions allowed, then it becomes quite a bit simpler. After all, what does "playing along" mean, anyway, right?
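To make that 80/20 point concrete, here is a small Python sketch that takes a set of (near-)simultaneous MIDI note numbers and matches their pitch-class set against a deliberately tiny table of allowed chord shapes; the templates and note spellings are illustrative only:

```python
# Sketch of the 80/20 idea: restrict analysis to a small set of allowed chord
# shapes and match incoming pitch-class sets against those templates.
# The template table is tiny and illustrative; a real one would cover the
# progressions the piece actually uses.
CHORD_TEMPLATES = {
    frozenset({0, 4, 7}):     "major triad",
    frozenset({0, 3, 7}):     "minor triad",
    frozenset({0, 4, 7, 10}): "dominant 7th",
}

NOTE_NAMES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

def classify(midi_notes):
    """Name the chord formed by a set of (near-)simultaneous MIDI note numbers."""
    pitch_classes = {n % 12 for n in midi_notes}
    for root in pitch_classes:
        shape = frozenset((pc - root) % 12 for pc in pitch_classes)
        if shape in CHORD_TEMPLATES:
            return f"{NOTE_NAMES[root]} {CHORD_TEMPLATES[shape]}"
    return None

# Example: F7 played as F3, A3, C4, Eb4
print(classify([53, 57, 60, 63]))  # -> "F dominant 7th"
```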
(Also, at electronic music conferences I've been to, there are lots of people doing various real-time accompaniment experiments based on input sound and movement.) Good luck!
You might also look at ChucK and SuperCollider, the two most popular 'real' realtime music programming languages.
Also, you might be surprised at how much you can accomplish with Ableton Live racks.
(and it's CSound. No 's' at the end)
see also:
Keykit
Arx
I have no idea if the second one is actually real or worth looking at. Keykit, however, is.
You might contact Gary Lee Nelson in the TIMARA department at Oberlin. 20 years ago I did a project that auto-generated the rhythm section for 12 bar blues and I recall him describing a tool that he knew of that did essentially what you're describing.
You might be interested in GenJam
The answer to your question is no - you're not crazy. Similar systems exist, but your description is pretty vague to begin with, so it's not much of a spec to judge against. I suggest you start writing a prototype and see how it does. Something extremely small and simple. Existing systems be damned.
I'm using C++ on the Win32 API (no MFC). I started writing my sequencer back on the Amiga 500. It doesn't do lights, but there's plenty to do in just music.
Good luck to you. It's an EXTREMELY fun project. I'd say don't pattern your project on how other projects work, because, if you ask me, they don't work so great ;) And the fun is being able to do something different.

Understanding Dijkstra's Mozart programming style

I came across this article about programming styles as seen by Edsger Dijkstra. To quickly paraphrase, the main difference is that Mozart, when the analogy is made to programming, fully understood the problem (debatable) before writing anything, while Beethoven made his decisions as he wrote the notes out on paper, creating many revisions along the way. With Mozart programming, version 1.0 would be the only version: the software should aim to work with no errors and maximum efficiency. Also, Dijkstra says software not at that level of refinement and stability should not be released to the public.
Based on his views, two questions. Is Mozart programming even possible? Would the software we write today really benefit if we adopted the Mozart style instead?
My thoughts: it seems that, to address the increasing complexity of software, we've moved on from this method to things like agile development, public beta testing, and constant revisions, methods that define web development, where speed matters most. But when I think of all the revisions web software can go through, especially during maintenance, when patches are often applied over patches and then refined through a tedious refactoring process, the Mozart way seems very attractive. It would at least lessen those annoying software updates (e.g. Digsby, Windows, iTunes), many of them the result of unforeseen vulnerabilities that require a new and immediate release.
Edit: Refer to the response below for a more accurate explanation of Dijkstra's views.
The Mozart programming style is a complete myth (everybody has to edit and modify their initial efforts), and although "Mozart" is essentially a metaphor in this example, it's worth noting that Mozart was substantially a myth himself.
Mozart was a supposed magical child prodigy who composed his first sonata at 4 (he was actually 6, and it sucked - you won't ever hear it performed anywhere). It's rarely mentioned, of course, that his father was considered Europe's greatest music teacher, and that he forced all of his children to practice playing and composition for hours each day as soon as they could pick up an instrument or a pen.
Mozart himself was careful to perpetuate the illusion that his music emerged whole from his mind by destroying most of his drafts, although enough survive to show that he was an editor like everyone else. Beethoven was just more honest about the process (maybe because he was deaf and couldn't tell if anyone was sneaking up on him anyway).
I won't even mention the theory that Mozart got his melodies from listening to songbirds. Or the fact that he created a system that used dice to randomly generate music (which is actually pretty cool, but might also explain how much of Mozart's music appeared to come from nowhere).
The moral of the story is: don't believe the hype. Programming is work, followed by more work to fix the mistakes you made the first time around, followed by more work to fix the mistakes you made the second time around, and so on and so forth until you die.
It doesn't scale.
I can figure out a line of code in my head, a routine, and even a small program. But a medium program? There are probably some guys that can do it, but how many, and how much do they cost? And should they really write the next payroll program? That's like wasting Mozart on muzak.
Now, try to imagine a team of Mozarts. Just for a few seconds.
Still it is a powerful instrument. If you can figure out a whole line in your head, do it. If you can figure out a small routine with all its funny cases, do it.
On the surface, it avoids going back to the drawing board because you didn't think of one edge case that requires a completely different interface altogether.
The deeper meaning (head fake?) can be explained by learning another human language. For a long time you think about which words represent your thoughts and how to order them into a valid sentence - that transcription costs a lot of foreground cycles.
One day you will notice the liberating feeling that you just talk. It may feel like "thinking in a foreign language", or as if "the words come naturally". You will sometimes stumble, looking for a particular word or idiom, but most of the time the translation runs in the vast resources of the "subconscious CPU".
The "high goal" is developing a mental model of the solution that is (mostly) independent of the implementation language, to separate solving a problem from transcribing the solution. Transcription is easy, repetitive and easily trained, and abstract solutions can be reused.
I have no idea how this could be taught, but "figuring out as much as possible before you start to write it" sounds like good programming practice towards that goal.
A classic story from Usenet, about a true programming Mozart.
Real Programmers write in Fortran.
Maybe they do now, in this decadent
era of Lite beer, hand calculators and
"user-friendly" software but back in
the Good Old Days, when the term
"software" sounded funny and Real
Computers were made out of drums and
vacuum tubes, Real Programmers wrote
in machine code. Not Fortran. Not
RATFOR. Not, even, assembly language.
Machine Code. Raw, unadorned,
inscrutable hexadecimal numbers.
Directly.
Lest a whole new generation of
programmers grow up in ignorance of
this glorious past, I feel duty-bound
to describe, as best I can through the
generation gap, how a Real Programmer
wrote code. I'll call him Mel, because
that was his name.
I first met Mel when I went to work
for Royal McBee Computer Corp., a
now-defunct subsidiary of the
typewriter company. The firm
manufactured the LGP-30, a small,
cheap (by the standards of the day)
drum-memory computer, and had just
started to manufacture the RPC-4000, a
much-improved, bigger, better, faster
-- drum-memory computer. Cores cost too much, and weren't here to stay,
anyway. (That's why you haven't heard
of the company, or the computer.)
I had been hired to write a Fortran
compiler for this new marvel and Mel
was my guide to its wonders. Mel
didn't approve of compilers.
"If a program can't rewrite its own
code," he asked, "what good is it?"
Mel had written, in hexadecimal, the
most popular computer program the
company owned. It ran on the LGP-30
and played blackjack with potential
customers at computer shows. Its
effect was always dramatic. The LGP-30
booth was packed at every show, and
the IBM salesmen stood around talking
to each other. Whether or not this
actually sold computers was a question
we never discussed.
Mel's job was to re-write the
blackjack program for the RPC-4000.
(Port? What does that mean?) The new
computer had a one-plus-one addressing
scheme, in which each machine
instruction, in addition to the
operation code and the address of the
needed operand, had a second address
that indicated where, on the revolving
drum, the next instruction was
located. In modern parlance, every
single instruction was followed by a
GO TO! Put that in Pascal's pipe and
smoke it.
Mel loved the RPC-4000 because he
could optimize his code: that is,
locate instructions on the drum so
that just as one finished its job, the
next would be just arriving at the
"read head" and available for
immediate execution. There was a
program to do that job, an "optimizing
assembler", but Mel refused to use it.
"You never know where it's going to
put things", he explained, "so you'd
have to use separate constants".
It was a long time before I understood
that remark. Since Mel knew the
numerical value of every operation
code, and assigned his own drum
addresses, every instruction he wrote
could also be considered a numerical
constant. He could pick up an earlier
"add" instruction, say, and multiply
by it, if it had the right numeric
value. His code was not easy for
someone else to modify.
I compared Mel's hand-optimized
programs with the same code massaged
by the optimizing assembler program,
and Mel's always ran faster. That was
because the "top-down" method of
program design hadn't been invented
yet, and Mel wouldn't have used it
anyway. He wrote the innermost parts
of his program loops first, so they
would get first choice of the optimum
address locations on the drum. The
optimizing assembler wasn't smart
enough to do it that way.
Mel never wrote time-delay loops,
either, even when the balky
Flexowriter required a delay between
output characters to work right. He
just located instructions on the drum
so each successive one was just past
the read head when it was needed; the
drum had to execute another complete
revolution to find the next
instruction. He coined an
unforgettable term for this procedure.
Although "optimum" is an absolute
term, like "unique", it became common
verbal practice to make it relative:
"not quite optimum" or "less optimum"
or "not very optimum". Mel called the
maximum time-delay locations the "most
pessimum".
After he finished the blackjack
program and got it to run, ("Even the
initializer is optimized", he said
proudly) he got a Change Request from
the sales department. The program used
an elegant (optimized) random number
generator to shuffle the "cards" and
deal from the "deck", and some of the
salesmen felt it was too fair, since
sometimes the customers lost. They
wanted Mel to modify the program so,
at the setting of a sense switch on
the console, they could change the
odds and let the customer win.
Mel balked. He felt this was patently
dishonest, which it was, and that it
impinged on his personal integrity as
a programmer, which it did, so he
refused to do it. The Head Salesman
talked to Mel, as did the Big Boss
and, at the boss's urging, a few
Fellow Programmers. Mel finally gave
in and wrote the code, but he got the
test backwards, and, when the sense
switch was turned on, the program
would cheat, winning every time. Mel
was delighted with this, claiming his
subconscious was uncontrollably
ethical, and adamantly refused to fix
it.
After Mel had left the company for
greener pa$ture$, the Big Boss asked
me to look at the code and see if I
could find the test and reverse it.
Somewhat reluctantly, I agreed to
look. Tracking Mel's code was a real
adventure.
I have often felt that programming is
an art form, whose real value can only
be appreciated by another versed in
the same arcane art; there are lovely
gems and brilliant coups hidden from
human view and admiration, sometimes
forever, by the very nature of the
process. You can learn a lot about an
individual just by reading through his
code, even in hexadecimal. Mel was, I
think, an unsung genius.
Perhaps my greatest shock came when I
found an innocent loop that had no
test in it. No test. None. Common
sense said it had to be a closed loop,
where the program would circle,
forever, endlessly. Program control
passed right through it, however, and
safely out the other side. It took me
two weeks to figure it out.
The RPC-4000 computer had a really
modern facility called an index
register. It allowed the programmer to
write a program loop that used an
indexed instruction inside; each time
through, the number in the index
register was added to the address of
that instruction, so it would refer to
the next datum in a series. He had
only to increment the index register
each time through. Mel never used it.
Instead, he would pull the instruction
into a machine register, add one to
its address, and store it back. He
would then execute the modified
instruction right from the register.
The loop was written so this
additional execution time was taken
into account -- just as this
instruction finished, the next one was
right under the drum's read head,
ready to go. But the loop had no test
in it.
The vital clue came when I noticed the
index register bit, the bit that lay
between the address and the operation
code in the instruction word, was
turned on-- yet Mel never used the
index register, leaving it zero all
the time. When the light went on it
nearly blinded me.
He had located the data he was working
on near the top of memory -- the
largest locations the instructions
could address -- so, after the last
datum was handled, incrementing the
instruction address would make it
overflow. The carry would add one to
the operation code, changing it to the
next one in the instruction set: a
jump instruction. Sure enough, the
next program instruction was in
address location zero, and the program
went happily on its way.
I haven't kept in touch with Mel, so I
don't know if he ever gave in to the
flood of change that has washed over
programming techniques since those
long-gone days. I like to think he
didn't. In any event, I was impressed
enough that I quit looking for the
offending test, telling the Big Boss I
couldn't find it. He didn't seem
surprised.
When I left the company, the blackjack
program would still cheat if you
turned on the right sense switch, and
I think that's how it should be. I
didn't feel comfortable hacking up the
code of a Real Programmer.
Edsger Dijkstra discusses his views on Mozart vs Beethoven programming in this YouTube video entitled "Discipline in Thought".
People in this thread have pretty much discussed how Dijkstra's views are impractical. I'm going to try to defend him a little.
Dijkstra is against companies essentially "testing" their software on their customers. Releasing version 1.0 and then immediately patch 1.1. He felt that the program should be polished to a degree that "hotfix" patches are borderline unethical.
He did not think that software should be written in one fell swoop or that changes would never need to be made. He often discusses his design ideals, one of them being modularity and ease of change. He did think, however, that individual algorithms should be written this way, after you have completely understood the problem. That was part of his discipline.
He found, after all his extensive experience with programmers, that programmers aren't happy unless they are pushing the limits of their knowledge. He said that programmers didn't want to program something they completely and 100% understood because there was no challenge in it. Programmers always wanted to be on the brink of their knowledge. While he understood why programmers are like that, he stated that it isn't conducive to programming with a low tolerance for errors.
There are some industries or applications of programming where I believe Dijkstra's "discipline" is warranted as well: NASA rovers, health-industry embedded devices (e.g. medication dispensers), certain financial software that moves our money around. These areas don't have the luxury of incremental change after release, and a more "Mozart approach" is necessary.
I think the Mozart story confuses what gets shipped versus how it is developed. Beethoven did not beta-test his symphonies on the public. (It would be interesting to see how much he changed any of the scores after the first public performance.)
I also don't think that Dijkstra was insisting that it all be done in your head. After all, he wrote books on disciplined programming that involved working it out on paper, and to the same extent that he wanted to see mathematical-quality discipline, have you noticed how much paper and chalk board mathematicians may consume while working on a problem?
I favor Simucal's response, but I think the Mozart-Beethoven metaphor should be discarded. That shoe-horns Dijkstra's insistence on discipline and understanding into a corner where it really doesn't belong.
Additional Remarks:
The TV popularization is not so hot, and it confuses some things about musical composition and what a composer is doing and what a programmer is doing. In Dijkstra's own words, from his 1972 Turing Award Lecture: "We must not forget that it is not our business to make programs; it is our business to design classes of computations that will display a desired behavior." A composer may be out to discover the desired behavior.
Also, in Dijkstra's notion that version 1.0 should be the final version, we too easily confuse how desired behavior and functionality evolve over time. I believe he oversimplifies in thinking that all future versions are because the first one was not thought out and done rigorously and reliably.
Even without time-to-market urgency, I think we now understand much better that important kinds of software evolve along with the users experience with it and the utilitarian purpose they have for it. Obvious counter-examples are games (also consider how theatrical motion pictures are developed). Do you think Beethoven could have written Symphony No. 9 without all of his preceding experience and exploration? Do you think the audience could have heard it for what it was? Should he have waited until he had the perfect Sonata? I'm sure Dijkstra doesn't propose this, but I do think he goes too far with Mozart-Beethoven to make his point.
In addition, consider chess-playing software. The new versions are not because the previous ones didn't play correctly. It is about exploiting advances in chess-playing heuristics and the available computer power. For this and many other situations, the idea that version 1.0 be the final version is off base. I understand that he is rightfully objecting to the release of known-to-be unreliable and maybe impaired software with deficiencies to be made up in maintenance and future releases. But the Mozartian counter-argument doesn't hold up for me.
So, did Dijkstra continue to drive the first automobile he purchased, or clones of exactly that automobile? Maybe there is planned obsolescence, but a lot of it has to do with improvements and reliability that could not have possibly been available or even considered in previous generations of automotive technology.
I am a big Dijkstra fan, but I think the Mozart-Beethoven thing is way too simplistic as well as inappropriate. I am a big Beethoven fan too.
I think it's possible to appear to employ Mozart programming. I know of one company, Blizzard, that doesn't release a software product until they're good and ready. This doesn't mean that Diablo 3 will spring whole and complete from someone's mind in one session of dazzlingly brilliant coding. It does mean that that's how it will appear to the rest of us. Blizzard will test the heck out of their game internally, not showing it to the rest of the world until they've got all the kinks worked out. Most companies don't take this approach, preferring instead to release software when it's good enough to solve a problem, then fix bugs and add features as they come up. This approach works (to varying degrees) for most companies.
Well, we can't all be as good as Mozart, can we? Perhaps Beethoven programming is easier.
If Apple adopted "Mozart" programming, there would be no Mac OS X or iTunes today.
If Google adopted "Mozart" programming, there would be no Gmail or Google Reader.
If SO developers adopted "Mozart" programming, there would be no SO today.
If Microsoft adopted "Mozart" programming, there would be no Windows today (well, I think that would be good).
So the answer is simply NO. Nothing is perfect, and nothing is ever meant to be perfect, and that includes software.
I think the idea is to plan ahead. You need to at least have some kind of outline of what you are trying to do and how you plan to get there. If you just sit down at the keyboard and hope "the muse" will lead you to where your program needs to go, the results are liable to be rather uneven, and it will take you much longer to get there.
This is true with any kind of writing. Very few authors just sit down at a typewriter with no ideas and start banging away until a bestselling novel is produced. Heck, my father-in-law (a high school English teacher) actually writes outlines for his letters.
Progress in computing is worth a sacrifice in glory or genius here and there.

Understanding Users - Does Performance Trump Looks?

It seems to me that whenever a GUI (Graphical User Interface) is involved, the look and feel of the interface nearly always trumps the performance of the application.
Is this a universal phenomenon?
Sufficiently bad looks trump any level of good performance.
Sufficiently bad performance trumps any level of good looks.
This boils down to the psychology of your target audience and to the architecture of your application. If the GUI reacts quickly and is laid out in such a way that it is intuitive to the user (as opposed to the developer), then the underlying layers may not need to perform so well. If, however, the user wants to get data from a database and they're left hanging while the data loads, they're going to feel very differently. Compare two web applications, just as an example:
Application one feels quite responsive, but under the covers things take longer than appears on the surface - it uses AJAX to talk to web services. The web services aren't the quickest, but everything happens on callback (asynchronously), so the user isn't held up while fields populate. It doesn't impede their workflow. On a bad day, when network performance degrades the background calls, sure, it's noticeable, but user activity isn't impeded any further than normal.
Application two doesn't feel quite so responsive. Everything happens on postback; there's no AJAX or web services. On a good day the page loads are fairly quick. Of course, on every postback the user's workflow is impeded while they wait for the page to reload. On a bad day, poor network performance degrades things noticeably, further impeding the user.
Application one is far less likely to get complaints because the user isn't held up even though fields aren't loaded so quickly. The user can enter data and move on. Everything is handled asynchronously. Of course, in the background, the Web Service process may actually be slower than the full page refresh but the user isn't going to care so much.
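The same principle applies outside the web: push the slow work into the background and keep accepting input instead of blocking on the result. A minimal Python sketch, where fetch_report() and its delay are made-up stand-ins for a slow back-end call:

```python
# Sketch: kick slow work off to the background and keep the "UI" responsive,
# instead of blocking on the result. fetch_report() and its 3-second delay are
# made-up stand-ins for a slow web service or database call.
import concurrent.futures
import time

def fetch_report(name: str) -> str:
    time.sleep(3)  # pretend this is a slow back-end call
    return f"report data for {name}"

def on_report_ready(future: concurrent.futures.Future) -> None:
    print("background load finished:", future.result())  # update the UI here

def main():
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)
    future = pool.submit(fetch_report, "monthly sales")  # "application one": asynchronous
    future.add_done_callback(on_report_ready)

    # Meanwhile the user keeps working; nothing is frozen while the data loads.
    for second in range(5):
        print(f"user still interacting... ({second + 1}s)")
        time.sleep(1)
    pool.shutdown(wait=True)

if __name__ == "__main__":
    main()
```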
From many thousands of hours writing software and directly interacting with my users - frequently people who aren't necessarily as computer-literate as your average 10-year-old - I've noted these points, which are key to getting acceptance from just such an audience [written from a user perspective]:
It must do what I want how I want it: Don't just read the spec and expect your code to meet exactly what it says on the paper. Really read what it says on the paper and understand what the user meant by that. Design to the underlying meaning of the words not the black and white of the ink on the paper. If you don't understand exactly what I meant, come and talk to me and I'll explain it until you do. I'll be less happy if you deliver software that missed my whole point than I will by your questions. I'll feel much happier if I get the feeling you're on my side by really trying to understand me.
It must assist my workflow and not impede it: It's great if all I have to do is push one button to complete what would've taken me an hour to do before, but if it freezes my computer for the 20 minutes it takes to complete the task, I'm not going to be a happy camper.
It must be intuitive to use: That means I don't want to have to wade through the documentation you didn't provide me with in order to figure out how to use it. Neither do I want a 20 minute explanation that I'm going to forget 3 minutes after you walk out of the door. Design the software such that my 10 year old could figure it out as easily as they can program the PVR. It means that I should interact with it in a manner that seems logical to me as the person that will be using it day in day out. It doesn't matter if it's functionally correct, if I can't figure out how to use it, I'm not going to use it, much less pay for it.
It must be responsive: I don't want to have to click a button and then wait 10 seconds for a list to load and then select an item from that list and then wait for another screen to load before I can select an action to complete on that item which then takes 5 minutes to complete. Find a way to load the data quickly - if you can't load the data quickly in response to my action, then figure out a trick to make it feel like the data is loading quickly - perhaps by loading it in advance in the background and only displaying what I need displayed in response to my action... my point is, I don't care what you do, just make it appear like it's doing it quickly.
It must be robust: It doesn't matter what I throw at it, it should accept it and move on. If I do something wrong or put something incorrect in a field, tell me - IN PLAIN ENGLISH!! I don't care about buffer overflows or IOExceptions thrown at line 479 while attempting to open a file. Just handle it and tell me what I did wrong in language I understand.
Give me documentation: Okay, I know I'm not going to read it, and I'm more likely to pick up the phone and call you than remember where I put it when you gave it to me. But knowing it's there makes me feel warm and fuzzy inside. It shows that you cared about the software enough - and about me enough - to write instructions that I can reference outside business hours when you're not available.
Price: This depends entirely on your audience, but in my experience, if you met all of the above points, price tends to be far less of a concern than it might appear on the surface.
Although "you can't judge a book by its cover", people often do with software. I don't know if I would say this is "universal", but certainly common.
I don't think it's even a true phenomenon. I don't care how zippy the "look and feel" is: if it takes a second to echo a keypress, the UI experience will suck. If it takes a long time to repaint the page for small changes, the UI will suck.
Now, as long as the response time of the application is less than some amount, then the look and feel will be a big part of the satisfaction.
Check out some of Jakob Nielsen's books on this.
Isn't it a bit of a false dichotomy? If the look and feel of an application isn't clean, well-organized and effective then you don't have a high-performing application. No matter how fast it may be.
I've found that the best combination is a snappy and easy-to-use GUI. This doesn't necessarily mean your app has to have great performance, but having the GUI freeze on you is a kiss of death. The iPhone's Safari does this well--you can continue to scroll around the screen even if the rendering engine can't keep up with you. Yeah, the no-content hatch marks are ugly, but at least the user knows he's in control.
I think it depends on the users. I work in a medium sized company in the IT department constructing web based software for consumption by the employees of said company. The users range from Human Resources, Manufacturing, R&D, Sales, Finance, to making applications for the CEO. Each of the different departments and users within those departments seem to have different criteria for what makes a quality application.
For instance, a Human Resource department usually deals with a lot of textual data. They spend heaps of time inputting things into forms like employee information, entitlements, recruiting etc. These types of users might opt for the look and feel of an application for this purpose, they want an aesthetically pleasing design that is easy to navigate and intuitive.
On the other hand a department like finance might favor performance in their reporting tools. I have had a few experiences with large SQL tables with complicated queries that took a considerable amount of time to execute. Users that run these kinds of reports many times a day soon get fed up of waiting and would gladly lose a bit of interface intuitiveness in exchange for time that could be spent on other tasks.
So, I would say that you can't make a blanket statement like "all users prefer a speedy application" or "all users like pretty applications". It really depends on the users' preferences, their job requirements, and the application's purpose.
Balance is everything.
The UI needs to look respectable and professional and flow similarly to other applications, so the user has a common experience and thus little learning curve. It shouldn't have unnecessary whistles and bells unless specifically requested.
The performance should be at least tolerable. If you have extra time in a project, I would spend that time speeding it up unless the user specifically asks for UI enhancements. Many times, whistles and bells can compromise performance as some UI enhancements require additional CPU time AND sometimes add awkwardness to the app. At first glance, some of these apps look cool but long term usability suffers in favor of the NEATO factor.
What is important for the user is that using the program is fun. The program should not only be able to do what I want, it must also feel good to use.
Making the user wait at moments the user does not understand or foresee isn't fun.
Crashing and making errors isn't fun either.
But looking good, helping me do my task, and letting me work fast and without workflow interruptions is fun.
Programmers often think that programs that are slow and use a lot of memory are bad, and they measure all their software by memory usage and processor use. But most of your users won't start top or the Windows Task Manager and look at the footprint of your program; they will simply use it, and if it feels good to use the program - and the rest of their computer while the program is running - they will feel good too.
One thing I often read about is using as many CPUs as possible to get a task done for the user in the shortest time. Is this high performance? Your program is very fast. But the computer is very slow in the meantime, and switching to the email program because I know the task will take its time is a pain in the ass. So sometimes you may want to free up some resources to improve the feel of your program, even if that slows down your own program.
The most important are price, functionality, compatibility, and reliability.
Looks and performance are both relatively unimportant, and in practice neither is therefore able to "trump" anything:
Compatibility: for example, in the real world I use MS Word, not because it's fast or pretty but because it's compatible with everyone else who uses it.
Functionality: when I want to book a train in France, I use http://www.voyages-sncf.com/ not because it's fast or pretty (or even outstandingly reliable) but because it has the necessary functionality.
Reliability: if an application crashes then I'm probably not going to use it again, no matter how fast it crashes, or how nice it looked before it crashed.
Price: etc. (say no more).
