Meta-composition during music performances [closed] - algorithm

A couple of weeks ago, my piano teacher and I were bouncing ideas off of each other concerning meta-composing music software. The idea was this:
There is a system that takes MIDI input from a bunch of instruments and pushes output to the speakers and lights. The software running on this system analyzes the MIDI data it's getting and determines which sounds to use, based on triggers set up by the composer (when I play an F7 chord 3 times within 2 seconds, switch from the harpsichord sound to the piano sound), pedals, or actual real-time analysis of the music. It would control the lights based on the performance and sounds of the instruments in a similar fashion - the musician would only have to vaguely specify what they wanted, and real-time analysis of their playing would do the rest. On-the-fly procedurally generated music could play along with the musician as well. Essentially, the software would play along with the performer, with one guiding the other. I imagine that it would take some practice to get used to such a system, but that it could have quite incredible results.
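To make the trigger idea concrete, here is a minimal sketch of the F7 example in Python. It assumes the third-party mido MIDI library and General MIDI program numbers (program 6 is harpsichord, program 0 is piano); both choices are mine, not part of the question, and a real system would need smarter chord tracking:

    import time
    import mido  # third-party MIDI library; needs a backend such as python-rtmidi

    F7 = {5, 9, 0, 3}   # pitch classes of an F7 chord: F, A, C, Eb
    WINDOW = 2.0        # trigger window in seconds
    hits = []           # timestamps of recently completed F7 chords
    down = set()        # pitch classes currently held down

    with mido.open_input() as midi_in, mido.open_output() as midi_out:
        for msg in midi_in:
            if msg.type == 'note_on' and msg.velocity > 0:
                down.add(msg.note % 12)
                if F7 <= down:  # all four chord tones are sounding
                    now = time.time()
                    hits = [t for t in hits if now - t < WINDOW] + [now]
                    if len(hits) >= 3:  # third F7 within 2 seconds
                        # switch the synth from harpsichord to piano
                        midi_out.send(mido.Message('program_change', program=0))
                        hits.clear()
            elif msg.type == 'note_off' or (msg.type == 'note_on' and msg.velocity == 0):
                down.discard(msg.note % 12)

Note the simplification: any extra note-on while all four chord tones are held counts as a new hit; a real implementation would require the chord to be released and restruck between hits.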
I'm a big fan of improv jazz. One characteristic of improv that is lacking from other art forms is its temporality. A painting can be appreciated 10 or 1000 years after it has been painted, but music (especially extemporized music) is as much about the performance as it is about the creation. I think the software I described would add a great deal to the performance: with it, playing the exact same piece would result in a completely different show each time.
So, now for the questions.
Am I crazy?
Does software to do any or all of this exist yet? I've done some research and haven't turned up anything. The key to this system is that it is running during the performance.
Were I to write something like this, would a scripting language such as Python be fast enough to do the computations that I need? Presumably it'd be running on a fairly quick system, and could take advantage of the 2^n core processors Intel keeps releasing.
Can any of you share your experience and advice concerning interfacing with musical instruments and lights and the like?
Have any ideas or suggestions? Cold and harsh criticism?
Thanks for your time in reading this, and for any and all advice!
(And sorry for the joke in the tags, I couldn't resist.)

People have used Max/MSP to do this kind of thing with MIDI, creating video accompaniment or just MIDI accompaniment. It's a completely domain-specific app, probably inspired by Smalltalk or something, which barely any real programmer could love, but musician-programmers do.
Despite the text on the site I just linked to, and the fact that 'everyone' uses the commercial version, it wasn't always a commercial product. IRCAM eventually released its own lineage, called jMax. PureData, mentioned in another answer here, is another rewrite of that lineage.
There's also CSound, which wasn't meant to be real-time, but is likely able to run in near real time now that computers are far faster than the ones CSound started on.
Some people have also hacked Macromedia Director extensions to allow doing MIDI stuff in Lingo... That's very outdated, and some of them have since moved to more modern Adobe environments.

Look at PureData. It can do extensive midi analysis and folks use it for performance.
Indeed, here's a video that flashes past a puredata screen. It shows someone interacting with a rather complex instrument using PD.
Also, look at CSounds.

I have used PyAudio quite extensively for dealing with raw audio inputs and found it to be very unpythonic, acting much more like a very thin wrapper over C code. However, if you're dealing with MIDI rather than raw waveforms, your tasks are quite a bit simpler, and Python should be quite fast enough, unless you play at 10000 beats per minute :)
Some of the issues: detecting simultaneity, harmonic (formal - i.e., chord structure) analysis.
This is also an 80/20 problem that if you restrict the chord progressions allowed, then it becomes quite a bit simpler. After all, what does "playing along" mean, anyway, right?
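As a rough illustration of the simultaneity problem, one common trick is to treat note-ons that land within a few tens of milliseconds of each other as a single chord. A sketch (the 30 ms window is an arbitrary assumption; real playing may need per-player tuning):

    # Group note-on events into "chords" when they arrive within a short window.
    SIMULTANEITY_WINDOW = 0.03  # seconds

    def group_chords(events):
        """events: iterable of (timestamp, midi_note) pairs, sorted by timestamp."""
        chord, chord_start = [], None
        for t, note in events:
            if chord and t - chord_start > SIMULTANEITY_WINDOW:
                yield sorted(chord)      # window closed: emit the finished chord
                chord, chord_start = [], None
            if chord_start is None:
                chord_start = t          # first note opens a new window
            chord.append(note)
        if chord:
            yield sorted(chord)

    # e.g. list(group_chords([(0.000, 60), (0.012, 64), (0.021, 67), (0.500, 65)]))
    # -> [[60, 64, 67], [65]]

Once notes are grouped into chords like this, the harmonic analysis step can work on pitch-class sets instead of a raw event stream.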
(Also, at the electronic music conferences I've been to, there are lots of people doing various real-time accompaniment experiments based on input sound and movement.) Good luck!

You might also look at ChucK and SuperCollider, the two most popular 'real' realtime music programming languages.
Also, you might be surprised at how much you can accomplish with Ableton Live racks.
(and it's CSound. No 's' at the end)

see also:
Keykit
Arx
I have no idea if the second one is actually real or worth looking at. Keykit, however, is.

You might contact Gary Lee Nelson in the TIMARA department at Oberlin. 20 years ago I did a project that auto-generated the rhythm section for 12 bar blues and I recall him describing a tool that he knew of that did essentially what you're describing.

You might be interested in GenJam

The answer to your question is no - you're not crazy.
Similar systems exist, but your description is pretty vague to begin with, so it's not much of a spec to judge against.
I suggest you start writing a prototype and see how it does.
Something extremely small and simple.
Existing systems be damned.
I'm using C++ on the Win32 API (no MFC).
I started writing my sequencer back on the Amiga 500.
It doesn't do lights, but there's plenty to do in just music.
Good luck to you.
It's an EXTREMELY fun project.
I'd say -don't- pattern your project on how other projects work.
Because, if you ask me, they don't work so great ;)
And the fun is being able to do something different.

Related

UI Design/UI Components [closed]

I've been thinking about the ongoing "revolution" in UI design and metaphors for interacting with the computer via a GUI, and I'm surprised that, for as long as computers have been accessible through GUIs, programmers are still searching for the best way to let users interact with their programs. It seems that most of the work centers around aesthetics (which I understand are important), but I don't understand why we are still looking for the magic bullet in UI design.
My question is: Why is UI design and components not a solved problem with accepted and understood approaches?
Probably because, like most things, design (and tech in general) is constantly changing, being worked on and revised. To claim that one of the most crucial elements in software could be 'solved' would be presumptuous; it would just keep changing anyway. There is no true definition of the 'perfect' GUI, if only because you don't know who your users will be (power users versus casual, more input required versus less).
perfection is a moving target
Jakob Nielsen rightfully said about ten years ago that users don't scroll. This isn't true anymore.
Users get trained by user interfaces. Windows 7 doesn't show a system menu icon in the top-left corner for many apps (e.g. in Explorer), but you can still go there and invoke the system menu. It took me a while to notice the icon was missing for some apps - while I was still using it.
(There are probably much better examples.)
The optimum isn't obvious. Consistency is core in UI, but only deviation from consistency can lead to improvements. You just can't optimize for "most consistent" or "most creative", both will fail.
it's a cross-domain skill. How many people are programmers, designers and neuroscientists? How many CS university courses teach cognitive models and how they apply to user interfaces? How many programmers pondered muscle memory, feedback loops and cognitive load?
UI's are still designed largely by programmers and sometimes fixed by designers after the fact.
effect is hard to measure
Take the Microsoft Office Ribbon: judging from the responses, it seems to work better for many, yet is harder by orders of magnitude for others. It was a bold step, no doubt, but was it good? Microsoft does run UI tests, and they did for the Ribbon - whether they screwed up the tests, whether office politics won over facts, or whether the backlash just wasn't foreseeable in the data, I don't know. (But I'd seriously like to.)
How many shops can afford user tests? Everyone can do hallway usability, but that just ensures you don't suck.
Skimming along the line
There is low pressure for the perfect UI, there is high pressure for a good enough UI. Given the lack of common knowledge and the high cost of improvement, perfect would not be affordable. The "Apple tradeoff" involves a higher price and technical shortcomings. They are pushing the limits (good!) with bold steps (very good!), which captures a notable but not major market segment. Still they are far from perfect.
I think if you had asked Henry Ford the same question about designing automobiles, you would have gotten an answer that applies equally to your question today.
And that answer is: we're still in the infancy of human-computer interaction design, and we don't yet have enough data to design genuinely ideal systems. Even if we did, we don't yet have the ability to manufacture such an ideal system at an affordable price point.
Much as Henry Ford could not have designed the Bugatti Veyron in his day, nor could he have built it even if he had designed it. Or the Prius, for that matter.
No, user interfaces aren't that subjective. Ergonomics is a scientific topic.
Think about this:
Today, everybody uses a computer. That was not the case 30 years ago.
Today , everybody uses a glass surface to access data. That was not the case 30 years ago.
Today, you've got several devices to access your data. That was not the case 30 years ago.
Today, data is collected everywhere. That was not the case 30 years ago.
Today, you can even control your data with glasses. That was not the case 30 years ago.
There is no magic bullet. Just like nature, we're talking about an evolving, living ecosystem, in a purely Darwinian way.
UI design is about making an application easy and comfortable to use for people who know little about it. That is the core challenge of UI design, so it keeps evolving. There is no endpoint of perfect design; as long as users can use it easily, it is a good design.
User interface is a very subjective subject; what might be ideal (graphically pleasing, efficient) for one person or task might not be ideal for another task, or even for another person doing the same task.
Also, the platforms on which GUIs are implemented are ever-changing, which requires GUIs to evolve to meet specific platform demands (touch screens, for example, lend themselves to a completely different user interface than a mouse-based platform, or even something like an ATM).
However, there are classes and books written on the subject, so there is some level of continuity in the area that has been there for quite some time.
In short, TECHNOLOGY.

Why didn't CASE tools succeed? [closed]

...or why did they fail?
I am going to build a proof of concept of something which could be classified as CASE, but I want to avoid some of the mistakes made before.
Thanks!
First, I think diagrams provide real value when they're small and simple. Large, highly detailed diagrams mostly waste a lot of paper, time, hard drive space, etc. A pencil and paper work quite nicely for diagrams that are small enough (and simple enough) to be useful. A software tool only helps when you're producing a diagram that's so large and complex that it's practically guaranteed to be useless.
Second, with most CASE tools, the fastest way to draw a diagram is to start by writing some (possibly simplified, mockup) code, and then "reverse engineer" the diagram from the code. Drawing the diagram directly is often slower than writing the code. To provide much real value, producing the high level diagram has to be quite a bit simpler than writing equivalent code.
When you get down to it, I've rarely seen CASE tools used as an actual "aid" to "software engineering" anyway. In most cases I've seen, the software engineering is done entirely separately, and the CASE tools were used to reverse engineer diagrams from code that was already written. The people producing the diagrams generally found them useless, and included them in reports to higher-level managers for "wow factor". The only "aid" they hoped for from the diagrams was impressing management with the complexity of what they were doing in the hope of increasing funding (some included diagrams of things like portions of the standard library, purely to add to apparent complexity).
As to how the tools failed at the software engineering part, I don't know of a single simple answer -- from what I've seen, I'd say it's more of a "death of a thousand nicks", than any single, glaring problem. If I did have to point to a single large problem, it would be that the ones I've looked at don't really take Patterns into account. Just for example, what I'd like is to work at an even higher level of abstraction, so I can point to some functionality, and play with things like "how would things look if I were to implement the following parts of that functionality as decorator classes?" Yes, I can draw one diagram with them as decorator classes, and one without, but I don't have a really quick, easy way to say "transform this entire hierarchy to move X, Y, and Z into decorator classes."
Contrast a typical CASE tool with a spreadsheet. In a spreadsheet, I can change one cell, and it will automatically recalculate how that affects anything else in the spreadsheet that depends on it. By contrast, CASE tools seem (at least to me) stuck at roughly the level of a grid control, where I can make changes in a cell, but I still have to manually track what other cells depend on that one, and what formulas to use, and calculate and modify all the affected cells by hand. Yes, if I want to print out a sheet of the right values, being able to edit them on the computer so I don't have eraser marks in the cells and such would be an improvement -- but only a small improvement, not the kind that turned personal computers from toys for a few hobbyists into a staple of essentially every business on earth.
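To make the spreadsheet contrast concrete, here is a toy Python sketch (purely illustrative, not taken from any CASE tool) of cells that recompute their dependents automatically; the complaint above is that CASE tools rarely give you even this much automatic propagation between model elements:

    class Cell:
        """A toy spreadsheet cell: setting a value recomputes all dependents."""
        def __init__(self, value=None, formula=None, inputs=()):
            self.value, self.formula, self.inputs = value, formula, inputs
            self.dependents = []
            for c in inputs:
                c.dependents.append(self)  # register as a dependent of each input

        def set(self, value):
            self.value = value
            self._propagate()

        def _propagate(self):
            for d in self.dependents:
                d.value = d.formula(*(c.value for c in d.inputs))
                d._propagate()  # ripple the change through the graph

    a, b = Cell(2), Cell(3)
    total = Cell(formula=lambda x, y: x + y, inputs=(a, b))
    a.set(10)
    print(total.value)  # 13 - recomputed automatically, no manual bookkeeping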
If you look at the Wikipedia entry (http://en.wikipedia.org/wiki/Computer-aided_software_engineering), you'll see the "classic" tools from the 1990s. Having worked with many of those tools, I would suggest that the focus on commercialisation fragmented the market. Typically, not only did you pay huge sums for the tools, but then for consulting, training and run-time environments. With so many tools on offer, it was hard to build a competent team specialising in a given tool.
Furthermore, it didn't help that the tools were over-sold, promising management unrealistic increases in productivity. There isn't any other area of IT where I've seen so much shelfware - products used for one project and then abandoned, often along with the project.
The concepts of CASE live on in Eclipse and many other MDE tools. The problems of steep learning curves and fragmentation have still not been solved. Whilst the cost of the tools has come down (to free, in many cases), the training, consulting and ramp-up costs are still there.
Before you expend a lot of effort on your CASE tool, have a look at the fields of MDA, MDE and DSLs, and even UML. It's worth browsing the OMG web site as well.
At the end of the day, you should focus on what you produce, not the tool. If you can automate some tasks, that's good. Building yet another CASE-like tool is a great intellectual exercise but has minimal chance of commercial success. After all, IBM, Oracle and Computer Associates have had only sporadic successes with their tools, and they are still vigorously marketing them to enterprise customers.
I worked with KnowledgeWare back in the early '90s. My simple answer to the demise of CASE is that as soon as you printed the model, it was old. Keeping the model and the code in sync became impossible. The first target platform was Micro Focus COBOL, but then came client-server in '94-'95, followed by the internet in '97-'98, and nobody really wanted to use CASE with those new platforms.

What learning habits can you suggest? [closed]

Our profession often requires deep learning; sitting down and reading, and understanding. I'm currently undergoing an exam period, and I'm looking for ways to learn more effectively.
I'm not asking about what to learn, or whether to prefer blogs over books, etc. My question is much more physical than that -
What do you do when you need to study, and I mean study hard?
I'm looking for answers such as
I slice my time to 2.5 hours intervals and make a break between them, but never during.
I keep a jar of water nearby.
I wake up at 6 o'clock sharp and start my day with a session at the gym.
What good learning habits did you acquire, or wish you had acquired?
(I know this isn't strictly programming-related, but it is programmer-related.)
I find the best way to set yourself up for learning is:
Get plenty of exercise and rest
Eat a balanced diet with little sugar and caffeine
Try to find a quiet area conducive to concentration
Try to practice what you learn from a book - theory is ok, but practice embeds the knowledge.
"The best way to test whether or not you know something is to try to explain it to someone else from scratch."
That has to be one of the best ideas I was ever given about knowing whether or not I really know something.
Make sure it's quiet and comfortable around you.
enough to drink and eat
don't just read but take notes, draw mindmaps and the like
don't just read but try
make a report, blog entry, presentation out of what you learned
learn the same thing from different sources: don't just believe what person A has to say; look for, read and understand the opinions of persons B and C as well, and understand the differences.
take nothing for granted
apply the knowledge to an actual project
I set aside time every day or x times per week. Otherwise I never do it.
I try to work in a different location every once in a while (home / coffee shop / staying late at the office); it keeps the tedium away.
I am careful about the music I use; I try to make sure it is relaxing.
Sometimes I leave headphones on even if music is off that way people don't talk to me.
I give myself specific goals before each break or for every session.
I reward myself
Read Andy Hunt's "Pragmatic Thinking and Learning" - lots of good suggestions there.
Tony Buzan wrote The Mind Map Book, describing his ideas for note taking. He also has some process ideas that I found very helpful. Foremost was this one: study new material for an hour, then take a short break (5-10 minutes) and then review the new material. Review the new material again an hour later, then two hours after that, then the next day. You need to refresh your exposure to the material multiple times over the course of time to really take it in.
While reading, I make sure I fully comprehend each section in the text before I continue.
Set up a time and take away all distractions. At home/dorm can be rough, so I would use the library while in college. Getting started is usually the hardest part, so don't set a specific time limit for when to quit. It is distracting, and you will be looking at your clock to see "how much longer you have." You don't want to be in that mindset. Work until you feel your brain is starting to lose focus, whether that is lack of comprehension or having to reread simple paragraphs repeatedly. Take a minute or two as a break, then refocus your mind. When that stops working, it is time for a real break.
Are you a night owl or early bird? Each person is different. I am very productive early in the morning, while my wife can barely function. We all have differences, so don't try to fight your nature.
+1 to what everyone else said. Making cheat sheets/notes was a very important part of my studying habits, whether I could use them on the exam or not.
I find it best to be fully dedicated to the task before I start. If you go into something with low morale, you will not do well. If you go into a task with high morale, you will understand things quicker, be able to apply them to real situations better, and will have a deeper comprehension. If I am not fully dedicated, I don't even bother. I've got better things to do with my time than to half-ass something, and you probably do too.
Expanding on a comment I left, see this article from Electronic Design from a few years ago: http://electronicdesign.com/Articles/Index.cfm?AD=1&ArticleID=5859.
If you can teach what you learned, you know it.
by Louis Frenzel
All learning is self-learning. Professors, trainers, and all teachers just organize and present the material to be learned. They don't teach it to you. You learn it. You're the one who actually absorbs, understands, and assimilates the knowledge by listening to the lectures, reading, thinking, solving problems, and other activities. Self-learning is a natural, human quality. While most of you have used this method in the past, you may want to do it on a more formal basis to speed up and fine-tune your methods. Here's a suggested approach (and trust me on this, you must write it down):
Clearly identify what you want to learn. Write it out.
Write some learning objectives for yourself. These statements clearly identify what you want to know and be able to do. For example, you should write something like "When I complete this learning assignment, I will be able to design and program an FIR DSP filter." The objectives should be expressed in "behavioral" terms, that is, using words that state some measurable outcome.
Identify some initial resources. Start with books at the local bookstore or go to www.Amazon.com or Barnes & Noble at www.barnesandnoble.com. Most cities don't have good technical bookstores, and it's tough to find anything at regular bookstores. Consider yourself lucky if you have a good technical bookstore or a good college bookstore. Plan to get multiple books to give you greater breadth of coverage with multiple explanations, examples, and perspectives. Don't forget to look through your stack of magazine back issues.
Check out online sources. Do one or more Web searches, or go to relevant company Web sites. You may run across an appropriate tutorial, white paper, or application note that will give you what you need. The large semiconductor and equipment manufacturers have tons of stuff on their Web sites, so start digging. Also check out the professional societies and other sources listed in the tables and sidebars.
Watch out for any conferences or seminars on this topic. Usually, such events never occur when you need them, but you might get lucky. If you find one, attend because it will provide a big head start for your own learning.
Organize your materials. Lay them out, mark them up, and then make an outline based on your objectives. See what you have and what you lack, and make an initial list of things to do.
Dig in. Set aside an hour a day or whatever you can to go through the materials. Turn off the radio, CD player, and television. Make a habit of finding some quiet time to read and learn.
Look for a human tutor. You could be working just down the hall from an expert on the very subject you're trying to learn. Pick his or her brain. Ask this person if he/she will help you understand and learn. Take this person to lunch or offer to pay for lessons. Most people will gladly share what they know, if you aren't too proud to ask. The best way to do this is to learn as much as you can on your own. Then, go for the professional, personal help with tough questions or when you get stuck.
Include some hands-on. Is there any hardware you can buy or put together to help you learn it? Maybe there's some software that will help. Buy it or have your employer buy it.
Write a paper or article or teach what you have learned. You have to know it to write it or teach it. There's no better way to learn for yourself than to have to explain it to others.
I generally find it useful to build a "cheat sheet" or other form of summary as I go.
On one hand, it forces me to reinterpret information and figure out what's really important.
On the other hand, I'm building a useful study tool for last-minute revisions, or future refreshing of knowledge.
2.5 hours without breaks seems like a lot. Experiment with different time slots to see what works best (personally, a sequence of 3 x (20 minutes of work, 5 minutes of break) works well).
Pomodoro Technique for time management
http://pomodorotechnique.com/get-started/
(watch "how" tab)
And the code(for programmers)
    while (day) {
        for (int i = 0; i < 4; i++) {
            work(25);               // one 25-minute pomodoro
            relax(i < 3 ? 5 : 15);  // 5-minute breaks; a 15-minute one after the fourth
        }
    }

You should implement the methods

    void work(int minutes);
    void relax(int minutes);

The variable day is changed from another thread.

Is the UI a valid indicator of internal quality? [closed]

When looking at the myriad types of software written at our company, I instantly jump to conclusions about the quality of the entire product based on the UI. If I find misspellings, weird tab orders, fields not lined up, or odd colors, I assume that the entire application is of poor quality.
I'm assuming that if the programmer doesn't care enough to make the outside look good, they don't care enough at all. I am NOT assuming that if the UI looks good, the application does what it should, although I am not immediately down on it -- it gets more leeway when it's being evaluated.
Is this a valid decision to make? For commercial software as well?
It may or may not be. But that's not really relevant. To your end user, crappy UI = bad code.
I think it's a good indicator of the care that a developer has for their work - basically a sense of professional pride.
It's a given that most devs don't make fantastic UI designers, but there are a basic set of rules that should be followed when developing professional software and these apply as much to the UI as they do to the internals.
So, basically I agree with you.
If the application was written by one developer, it's not an unfair assumption that a slovenly UI is indicative of the underlying code quality.
However, if it was written by a team of 5 or 7 or 13, there will likely be a wide range of quality under the hood (it just might be that the newbie was given the UI).
Also, if the app is 5+ years into its lifecycle, with maintenance being performed by FBN contractors or interns or whoever is handy, you may find a lot of good code under the hood that's slowly rotting because of indifferent management and undisciplined developers who just throw a "patch" at it, compile it, check it back in, and throw it over the wall to production.
A crappy UI can be indicative of a lot of things, none of them good, some worse than others.
In my opinion it is a valid decision. And you are right when you say that good looking software is not necessarily good software internally.
But definitely, if the programmers don't care about the usability of the program, most likely they won't care about its functionality.
If the UI is riddled with typos and inconsistencies, it is probably fair to say that the QA process and project management were a bit lacking. That doesn't really imply that the codebase is riddled with bugs.
In a commercial product, it most likely means that less people will buy it, so whilst sales are not really a quality metric, they're pretty important in the overall scheme of things.
People are more likely to buy things that look good, behave as they expect them to, and "Don't make them think".
Many programmers suck at UI design, and that's not their fault; it does not mean they suck at coding. They're just generally more interested in the internal beauty of what they make; otherwise, they'd be liberal arts majors instead.
It really depends. I know of software developers who are excellent at just about all aspects of design and implementation but have lousy UI skills. Many times the UI is an afterthought as a nod to the user. In the cases of scientific software or other software where the processing is central or key, it might not be a good idea to judge the quality of the rest of the code by the UI. However, overall - it might be a good indicator that the software company has not done its job well.
It all really depends on each case, but if the UI is not usable or a pain in the neck, then the underlying code is harder to use and not worth the time I suppose.
The opposite is also not true - flashy, beautiful UIs do not mean that the underlying code is good at all. Anyone can wrap a piece of junk with a nice UI.
I'd agree with the masses here. Poor UI mean that the product development team dropped the ball.. That said. I consider myself a good coder. Great at math, but dyslexic and attention deficit disorder.. Give me a set of earphones and some code and I'm on my way. Don't however expect me to mock up a great GUI. Line things up.. That I do.
Now ADD to that the fact that, as the "programmer", even when I see things in the GUI that bug the crap out of me (as a person who uses it), I don't get to fix them.. Hell, when I do fix them, I get QA asking me for the design document and the approval from on high. After a while I stopped caring about the GUI..
I write solid code, that works. It's fast, clean and small.. it's where I get to have an impact. The GUI is beyond my pay grade. :(
In my experience it's usually the other way around. You get good quality UI's by having people who spend "huge" amounts of time focusing on widget behavior and look&feel instead of domain model or automated tests.
Some of the best quality systems I've worked with had auto-generated UIs, that were rather unpleasant to use.
As much as I really want to say, "Yes, absolutely," it's not always a valid conclusion. The programmer or QA team may have an excellent understanding of the application but a terrible grasp of the language or presentation.
Some people simply focus on what they consider to be important—and get it fairly close to perfect—and all but ignore what they consider "fluff" or "window dressing."
But I do have very a strong tendency to pre-judge the overall quality of the software based on first impressions.
No, the UI is not indicative of the internal code. Many a time we come across things that are shiny and look cool but serve no purpose. Think of it as seeing a Ferrari parked at the store. It looks awesome and you wonder what it would be like to get behind the wheel -- only to find out it's a body kit slapped on a late-model 1980 Acura with 500k miles on it.
A personal example: at my current employer, we have stellar code in our software (and I say this subjectively, since I was not there for 99% of its creation). But when you look at our UI, it can seem a bit old and rusty. That, and many times the UI developers don't even touch much of the internal code.
"I'm assuming that if the programmer doesn't care enough to make the outside look good, that they don't care enough at all" - I don't believe this to be true. I think most programmers look for functionality as opposed to shininess as they tend to be creatures of logic, not artists.
Take Linux as an example -- stellar internal code, but the UI was lacking for a long time, which is why no one in the mainstream has used it extensively, as opposed to Windows or Mac.
Short version: !UI.Equals(InternalQuality)

Good tips for a Technical presentation [closed]

I am planning to give a Technical presentation for a product we are building.
The intended audience is technical developers. So, most of the time, I will be debugging through the code in Visual Studio, doing performance analysis, some architecture review, etc.
I have read a couple of blogs on font sizes to use, templates to use in Visual Studio, and presentation tools, among other very useful tips.
What I am looking specifically for is how to keep the session interesting without making it a dry code walkthrough? How to avoid making people fall asleep? Would be great to hear some stories..
Update 1: Nice YouTube clip on ZoomIt: Glue Audience To Your Presentation With Zoomit.
Update 2: New post from Scott Hanselman after his PDC talk - Tips for Preparing for a Technical Presentation.
Put interesting comments in the code.
// This better not fail during my next presentation, stupid ##$##%$ code.
Don't talk about them, let them be found by the audience.
-Adam
FYI, that Hanselman article has an update (your link is from 2003).
Use stories. Even with code examples, have a backstory: here's why someone is doing this. To increase audience participation, ask for examples of X where X is something you know you can demo, then phrase the walk-through in those terms.
Or maybe you have war stories about how it was different or how it normally takes longer or whatever. I find people identify with such things, then as you give your examples they're mentally tracking it back to their own experience.
I recommend Scott Hanselman's post (previously mentioned). I've written up a post with some tips, mostly for selfish reasons - I review it every time before I give a technical presentation:
Tips for a Technical Presentation
If you're using a console prompt, make sure the font is readable and that your paths are preset when possible.
Take 15 minutes to install and learn to use ZoomIt, so your audience can clearly see what you're showing off. If you have to ask if they can see something, you've already failed.
Probably most important is to have separate Visual Studio settings pre-configured with big, readable fonts.
One of the best pieces of advice I ever got for doing demos is to just plain record them in advance and play back the video, narrating live. Then the unexpected stuff happens in private and you get as many stabs at it as you need.
You still usually need some environment to use as a reference for questions, but for the presentation bit, recording it in advance (and rehearsing your narration over the video) pretty much guarantees you can be at the top of your game.
I also like to put small jokes into the slides and that recorded video that make it seem like the person who made the slides is commenting on the live proceedings or that someone else is actually running the slides. Often, I make absolutely no reference at all to the joke in the slide.
For instance, in my most recent demo presentation, I had a slide with the text "ASP.NET MVC" centered that I was talking over about how I was using the framework. In a smaller font, I had the text "Catchy name, huh?". When I did that demo live, that slide got a chuckle. It's not stand-up worthy by any stretch of the imagination, but we're often presenting some pretty dry stuff and every little bit helps.
Similarly, I've included slides that are just plain snarky comments from the offscreen guy about what I'm planning to say. So, I'll say, "The codebase for this project needed a little help", while the slide behind me said "It was a pile of spaghetti with 3 meatballs, actually" and a plate of spaghetti as the slide background. Again, with no comment from me and just moving on to the next slide as though I didn't even see it actually made it funnier.
That can also be a help if you don't have the best comedic timing by taking the pressure off while still adding some levity.
Anyway, what it really comes down to is that I've been doing most of my demo/presentation work just like I would if it was a screencast and then substituting the live version of me (pausing the video as appropriate if things go off the rails) for the audio when I give it in front of an audience.
Of course, you can then easily make the real presentation available afterward for those who want it.
For the slides, more often than not I go out of my way not to say the exact words on the screen.
If you are showing code that was prepared for you then make sure you can get it to work. I know this is an obvious one but I was just at a conference where 4 out of 5 speakers had code issues. Telling me it is 'cool' or even 'really cool' when it doesn't work is a tough sell.
You should read Mark Jason Dominus's excellent presentation on public speaking:
Conference Presentation Judo
The #1 rule for me is: Don't try to show too much.
It's easy to live with a chunk of code for a couple of weeks and think, "Damn, when I show 'em this they are gonna freak out!" Even during your private rehearsals you feel good about things. But once in front of an audience, the complexity of your code is multiplied by the square of the number of audience members. (It becomes exponentially harder to explain code for each audience member added!)
What seemed so simple and direct privately quickly turns into a giant bowl of spaghetti that under pressure even you don't understand. Don't try to show production code (well factored and well partitioned), make simple inline examples that convey your core message.
My rule #1 could be construed, by the cynical, as "don't overestimate your audience." As an optimist, I see it as "don't overestimate your ability to explain your code!"
rp
Since it sounds like you are doing a live presentation, where you will be working with real systems and not just charts (PPT, Impress, whatever) make sure it is all working just before you start. It never fails, if I don't try it just before I start talking, it doesn't work how I expected it to. Especially with demos. (I'm doing one on Tuesday so I can relate.)
The other thing that helps is simply to practice, practice, practice. Especially if you can do it in the exact environment you will be presenting in. That way you get a feel for where you need to be so as not to block the view for your listeners as well as any other technical gotchas there might be with regards to the room setup or systems.
This is something that was explained to me, and I think it is very useful: you may want to consider not going too slide-heavy at the beginning. You want to show your listeners something up front (obviously, probably not the code) that will keep them on the edge of their seats, wanting to learn how to do what you just showed them.
I've recently started to use Mind Mapping tools for presentations and found that it goes over very well.
http://en.wikipedia.org/wiki/Mind_map
Basically, I find people just zone out the second you start to go into details in a presentation. Conveying the information with a mind map (at least in my experience) makes it much easier for the information to be absorbed and tied together.
The key is presenting the information in stages (i.e., your high-level ideas first, then more detail, one piece at a time). The mind-mapping tools basically let you expand your map as the audience watches and you present more and more detailed information. Doing it this way lets your audience gradually absorb the data in smaller stages, which tends to aid retention.
Check out FreeMind for a free tool to play with. Mind Manager is a paid product, but is much more polished and fluent.
Keep your "visual representation" simple and standard.
If you're on Vista hide your desktop icons and use one of the default wallpapers. Keep your Visual Studio settings (especially toolbars) as standard and "out of the box" as possible. The more customizations you show in your environment the more likely people are going to focus on those rather than your content.
Keep the content on your slides as concise as possible. Remember, you're speaking to (and in the best scenario, with) your audience, so the slides should serve as discussion points. If you want to include more details, put them in the slide notes. This is especially good if you make the slide decks available afterwards.
If someone asks you a question and you don't know the answer, don't be afraid to say you don't know. It's always better than trying to guess at what you think the answer should be.
Also, if you are using Vista be sure to put it in "presentation mode". PowerPoint also has a similar mode, so be sure to use it as well - you have the slide show on one monitor (the projector) and a smaller view of the slide, plus notes and a timer on your laptop monitor.
Have you heard of Pecha-Kucha?
The idea behind Pecha Kucha is to keep presentations concise, the interest level up, and to have many presenters sharing their ideas within the course of one night. Therefore the 20x20 Pecha Kucha format was created: each presenter is allowed a slideshow of 20 images, each shown for 20 seconds. This results in a total presentation time of 6 minutes 40 seconds on a stage before the next presenter is up.
Now, I am not sure whether such a short duration would be OK for a product demonstration.
But you can take some nice ideas from the concept, such as being concise, keeping to the point, and managing time and space effectively.
Besides software like MindManager to show your architecture, you may find a screen recorder a useful presentation tool for illustrating your technical task. DemoCreator would be something nice for making a video of your on-screen activity, and you can add callouts to make the process easier to understand.
If you use slides at all, follow Guy Kawasaki's 10/20/30 rule:
No more than 10 slides
No more than 20 minutes spent on slides
No less than 30 point type on slides
-Adam
