So I'm a coder: I do PHP, JavaScript, and Objective-C. I'm currently working on a website pretty much full-time and, being 16, don't really have an 'office' except a desk in my bedroom... So I'm spending quite a long time in my room coding this website, and I have a bit of a problem.
My bedroom is at the back of our house, with the windows facing West, so in the afternoon the sun shines in the windows heating up the room. Right now as I type this, it's 28.5 degrees C, but it does get as hot as 32 which is seriously too uncomfortable to work in.
Being the sort of geek I am, I was wondering whether it would be possible, or feasible, to get a USB thermometer or the like which is Mac-compatible, and then use AppleScript to detect when the temperature gets to a certain level. If, for example, it gets to 23 degrees, I would like a Growl notification to appear saying "Open the small window", at 25 degrees "Open both large windows", and at 28 degrees "Open all windows and the door", for example...
I think this'd be pretty neat, even if I am the only person who'd ever use it! So, is this sort of thing possible, and where would I get a USB thermometer (if they even exist...) from? eBay? I also realise this question isn't directly related to programming, but I didn't really know where else to ask it... so, a programmey question would be: is it possible to fetch data from USB devices via AppleScript? There, that'll do.
Cheers!
Jack
P.S. For all you hardcore Arizonians or Texans or whatever, I live in the UK, and temperatures like 30 degrees make us pass out ;)
Cheers matey. There are plenty of them rickety ol' USB thermometers up on Amazon and eBay and whatnot, and also Google Shopping, for around $15 American (that's right around 22 of your pounds, for those of you across the pond; coincidentally, that's 1.2x twice the weight of the original Xbox). Anyway, as far as the code goes, I'd wager you're in a bit of a sticky wicket, ol' top. So far as the eye can wander, AppleScript doesn't support much hardware integration; it's more of a macro thing, aye? Don't go thinking I'm all talk and no trousers, though - I'm no barrack-room lawyer, I've been doin' it for donkey's years. The mother company (God save the King) might implement it in the next release, but I'd wager they'd throw us a canary in a coal mine first. Who's to say - I'm just the man on the Clapham omnibus - but either way, drafting this comment has been right royal. You Brits take care over there ;)
-See you anon
You won't be able to monitor a USB device directly using AppleScript. Even if you could access the USB output directly, you wouldn't know how to convert the signals into a temperature. However, if the device is Mac-compatible it will come with software for the thermometer... and you will be able to monitor that software using AppleScript. A quick Google search turned up the one below, and it says the software is AppleScriptable.
http://practsol.com/thummac.htm
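For the notification half of the question, the threshold logic is simple enough to sketch independently of whichever thermometer you end up with. Here's a minimal Python sketch, assuming a hypothetical read_temperature() standing in for whatever the vendor's software or command-line tool exposes; notify() just prints, but could shell out to growlnotify or whatever Growl hook you prefer.

    import random
    import time

    THRESHOLDS = [
        (28.0, "Open all windows and the door"),
        (25.0, "Open both large windows"),
        (23.0, "Open the small window"),
    ]

    def read_temperature():
        # Placeholder: replace with a call into whatever the thermometer's
        # software exposes (a scriptable app, a CLI tool, a log file, ...).
        return 20 + random.random() * 12   # simulated reading for testing

    def notify(message):
        # Stand-in for a Growl notification (e.g. shelling out to growlnotify).
        print(message)

    last_message = None
    while True:
        temp = read_temperature()
        message = next((msg for limit, msg in THRESHOLDS if temp >= limit), None)
        if message and message != last_message:
            notify(message)
            last_message = message
        time.sleep(60)   # poll once a minute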
A little late reply here, but right now, in Texas, I've got my nephew digging a 60in x 16in x 4ft hole just outside my home office. Into it I'll be placing a 40ft copper tubing "radiator". This will be the "core" of a geothermal loop to cool my Mac. Yeah, it's wildly watercooled, because the mods cause me to exceed the TDP of the case (it's a MacPro1,1), but even before that my office got uncomfortably hot. Now it just causes my computer to shut down when I try to encode video. Not good.
In the UK you should be able to do the same (well, cool your room at least) with just a ground loop, a cheap pump, and a fan with more copper tubing coiled behind it. My project has ended up costing me quite a bit because of its complexity and mistakes made along the way, but a simple system to cool just the room should cost less than £150-200 and is definitely a DIY project you and your dad could do.
One of the biggest reasons to do this is the electricity (cooling) cost saved - for me, literally several hundred dollars per year. And it's "green" cooling. But the biggest reason is the amount of science I've learned in the past 3 months.
Talk to your Dad about it. It's been fun, and would be a good project for a young Scottish engineer!
I've been thinking about the ongoing "revolution" in UI design and metaphors for interacting with the computer via a GUI, and I'm surprised that, for as long as computers have been accessible through GUIs, programmers are still searching for the best way to let users interact with their programs. It seems that most of the work centres around aesthetics (which I understand are important), but I don't understand why we are still looking for the magic bullet in UI design.
My question is: why are UI design and components not a solved problem, with accepted and understood approaches?
Probably because, like most things, design (and tech in general) is constantly changing, being worked on and revised. To call one of the most crucial elements in software "solved" would be a stretch - it would just be changed again. There is no true definition of the 'perfect' GUI, if only because you don't know who your users will be (power users versus casual, more input required versus less).
perfection is a moving target
Jakob Nielsen rightly said about ten years ago that users don't scroll. This isn't true anymore.
Users get trained by user interfaces. Windows 7 doesn't show a system menu icon in the top left corner for many apps (e.g. in Explorer), but you can still go there and invoke the system menu. It took me a while to notice the icon was missing for some apps - while I was using it.
(There are probably much better examples.)
The optimum isn't obvious. Consistency is core in UI, but only deviation from consistency can lead to improvements. You just can't optimize for "most consistent" or "most creative", both will fail.
it's a cross-domain skill. How many people are programmers, designers and neuroscientists? How many CS university courses teach cognitive models and how they apply to user interfaces? How many programmers pondered muscle memory, feedback loops and cognitive load?
UIs are still designed largely by programmers, and sometimes fixed by designers after the fact.
effect is hard to measure
Take the Microsoft Office ribbon: judging from the responses, it seems to work better for many, yet is harder by orders of magnitude for others. It was a bold step, no doubt, but was it good? Microsoft does run UI tests, and they did so for the ribbon - whether they screwed up the tests, whether office politics won over facts, or whether the backlash just wasn't foreseeable in the data, I don't know. (But I'd seriously like to.)
How many shops can afford user tests? Everyone can do hallway usability, but that just ensures you don't suck.
Skimming along the line
There is low pressure for the perfect UI, there is high pressure for a good enough UI. Given the lack of common knowledge and the high cost of improvement, perfect would not be affordable. The "Apple tradeoff" involves a higher price and technical shortcomings. They are pushing the limits (good!) with bold steps (very good!), which captures a notable but not major market segment. Still they are far from perfect.
I think if you had asked Henry Ford the same question about designing automobiles, you would have gotten an answer that applies equally to your question today.
And that answer is: we're still in the infancy of human-computer interaction design, and we don't yet have enough data to design genuinely ideal systems. Even if we did, we don't yet have the ability to manufacture such an ideal system at an affordable price point.
Much like Henry Ford could not have designed the Bugatti Veyron in his day, nor could he have built it even if he had designed it. Or the Prius, for that matter.
No, user interfaces aren't that subjective. Ergonomics is a scientific topic.
Think about this:
Today, everybody uses a computer. That was not the case 30 years ago.
Today, everybody uses a glass surface to access data. That was not the case 30 years ago.
Today, you've got several devices to access your data. That was not the case 30 years ago.
Today, data is collected everywhere. That was not the case 30 years ago.
Today, you can even control your data with glasses. That was not the case 30 years ago.
There is no magic bullet. Just like nature, we're talking about an evolving, living ecosystem, in the purely Darwinian sense.
The point of UI design is to let people who know little about the application understand it and use it comfortably. That is the core challenge of UI design, so it keeps evolving; there is no end state of perfect design. As long as it lets users work easily, it is a good design.
User interface design is a very subjective subject: what might be ideal (graphically pleasing, efficient) for one person or task might not be ideal for another task, or even for another person doing the same task.
Also, the platforms on which GUIs are implemented are ever-changing, so GUIs need to evolve to meet specific platform demands (touch screens, for example, lend themselves to a completely different user interface than a mouse-based platform, or even something like an ATM).
However, there are classes and books written on the subject, so there is some level of continuity in the area that has been there for quite some time.
In short, TECHNOLOGY.
I am a first time intern at a large corporation and I created a GUI tool that lets my coworkers visualize the log file that their product produces. The tool, known as MRI, is nearing completion and I face a conflict.
One party (two ambitious Indian guys who live in California) wants me to adapt MRI to a new format and to display much more detailed information. The current version of MRI is built around the idiosyncrasies of the 20-year-old log file format. In my opinion it is a bad idea to try to grow a more powerful, more universal tool out of a less powerful and idiosyncratic one (better to start from scratch - something I probably don't have time to do).
The other party is composed of several marketing types and my father. They are drooling over the shiny new GUI that I slapped on top of their crazy old log file, and every one of them wants some feature that would help them with their day to day work.
Whom should I please? I just want to code. Which path will lead to less dumb conflicts like this?
Sounds like you are getting your first taste of the world of a manager! I'm doing exactly the same thing 10 years later, with a much bigger budget and head count. So it never really ends.
I love the answer about doing some time estimates for each requested addition, and then sitting all parties down and working on a negotiation that gets the greatest degree of satisfaction. I'm betting that since you are an intern, and many of the people you mention have seniority, they will be able to sort out amongst themselves who has the biggest stake and the most power in the situation. But if not, don't hesitate to act as moderator -- after all, this is your project.
Other things to think about:
Types of stake holders:
Customers - the person who controls the budget is often the most powerful of stakeholders, after all, they control your ability to do the work by controlling your funding. For an internal tool, this is probably an internal stakeholder, but it may be someone from a non-engineering group, if this tool is for a non-engineering purpose.
Users - in the long run, users often make or break a tool. They definitely determine the tool's longevity. It's not unusual, though, for users to lack advocates. And in a big internal project, it's entirely possible that the users are not the customers.
Technical Management - particularly when you are an intern and when you are working on an internal project, technical management is the group that's most important for you (as an individual) to please. They may have their own stake in the feature set, as they may be looking for a certain feature path for the product that fits a long term technological end game. Ideally, they should be on your side, and helping to figure out the best feature set.
In a big company, hopefully these roles are really well defined, probably with an org chart. But not necessarily. And in a group that's used to working together, they may not make it really clear to a newcomer exactly what the official roles are. As the guy doing the work, your job should be to accurately and honestly tell them your best guess at the effort it will take to get each feature done. And to be open to ideas for making it cheaper/easier.
Negotiation:
The best negotiation advice I've ever gotten was "a good negotiation is one where everyone thinks they won". Sadly, the frequent outcome is that everyone feels equally screwed. The difference between every stakeholder leaving happy and every stakeholder feeling beaten down is seeing the big picture and being innovative about getting everyone's needs met. In the end, no one really cares how you do it; if you can make their jobs easier, they will be happy. So finding features that serve everyone well can be the key to resolving the conflict.
Being able to do this well will really make a positive impact on your bosses. This is an extremely rare skill, and this type of finesse does get noticed.
Not having it does not mark you as a pariah, though; not many engineers enjoy negotiation, and it's never worth making every engineer be good at it. It's far better to find an engineering manager who is good at negotiating and let them be the "speaker for the geeks", so the rest of the engineers can do their work in peace. :)
Sit the two parties down in the same room. Show them a list of the features each has asked for and how long you think each will take. Then explain that all of it is possible but all of it takes time, and ask them to come to agreement on what they would like when. Note down what is agreed and mail it to everyone afterwards so there is a record. Don't forget to pad your estimates to allow for testing and debugging time.
Alternatively, work out who the person directly responsible for managing you is, implement what they tell you (feeding back estimates of how long each thing will take) and tell anyone else who asks you to implement anything to go talk to that person to get it on your schedule; then doing the above management work becomes their problem.
If doing one of the above does not cause the matter to resolve itself, explain that the Californians' features would require a refactor, and that if you are going to do that, you would rather hold off implementing any features for the other party until the refactor is complete, since doing the same work twice is wasteful.
While novice software designers expect their users to behave rationally, that's far from being the case; I've seen many times that the user's perception is totally disconnected from reality, or their feedback is obviously irrational.
I think we are the ones who should adapt, not the other way around.
There's only one way that I know of to achieve this: listen to users, especially about what they don't like in the software they use.
If there's one thing I've learned so far, it's that they often complain about things one wouldn't expect.
What unexpected things did you learn from your users?
A few years ago, hospitals (at least French hospitals) were run using old Windows 3.11 software. Every single task was tedious; moving someone from one room to another would take 5 minutes for an expert user.
A friend of mine was working on selling up-to-date software to those people. The same simple task would take 30 seconds for a total beginner.
While most of the users were very happy with the new software, a handful were complaining, which wasn’t a surprise (there’s always a handful of users complaining). What was more unexpected was their reason: the software was damn slow. “The same simple task was instantaneous, now it takes ages to achieve. Give me my old software back”, they would say.
My friend decided to meet them, and asked them for a live demo of the slowness they were complaining about.
"Look," said the user, "with my old software: I input the first name, enter, the name, enter, the admission number, enter, the old room number, [...insert 5 minutes here...] the new room number, enter... and it's done..... See... Everything is instantaneous."
“Now, look at your software. I do a drag and drop, as you taught me. And I wait, I wait… look, it’s done..I’ve waited for almost 30s….”
That's a real-world example. It really happened. I'm pretty sure that if the software had been modified to ask for useless information during the 30-second wait (information it would simply have discarded afterwards), this user would have had a far better feeling about the new software.
If you think about it, there's no such thing as irrational user behaviour; there's just a mismatch between your expectations and theirs. The only way to close that gap is through dialogue. That doesn't necessarily mean going and doing usability studies; often the right dialogue is for them to read the help, where the discrepancy is easily dealt with.
The only wrong thing to do is to not listen to what they are saying - or to listen and not really hear them (see the post on here about IE on the Mac - it's the height of arrogance). Of course you are going to get some people who just don't like change and will whinge about anything, but in general if a user will take the time to point out something in your software which bugs them, then you should listen. You may choose to ignore them, but if you listen right you may just as easily uncover a real gem.
I don't believe your users or customers will often innovate for you, but I strongly believe that they are the key to your software being usable, and usability leads directly to success. So to characterise them as irrational probably doesn't serve your best purposes - or theirs. Better to take them seriously to start with and filter out what you consider not to be good feedback.
Developing for a hand held unit many years back, I got contacted by a user who complained that their unit kept on turning off immediately after power on. It turned out to be a bug; the startup message ended with the line "Press any key to continue". It should have said "Press any key, except the big red key marked power, to continue".
One thing I have learned over the years is that time spent with end users on requirements analysis, before going anywhere near design, is hugely important, as is understanding the culture and educational background of the users. Designing computer systems that look and work like the existing manual systems is a good start, as is understanding the workflow. Another hand-held van-sales delivery system I was involved with was specced to capture on-screen customer signatures on delivery, and this was necessary to complete the transaction. It turned out that most of the deliveries actually occurred early in the morning, before anyone was there to sign for them, so the perceived workflow didn't gel with reality at all. The client's IT staff didn't actually know this, nor did the business analyst. If you design systems without input from actual end users, you do so at your peril.
In my previous job, I was designing a huge piece of trading software for a huge bank.
The software would typically take around 5 minutes to launch.
Of course, the users were complaining a lot about the startup time, especially when the software was crashing during the day, which was happening from time to time.
From the day we added a detailed progress bar (progressing quite regularly, with an indicator of the number of remaining items), the complaints almost stopped.
Typical users would say, "It used to take ages to load, but now it's quite fast."
The next step for us was to display the user interface before the data was loaded, instead of after (the latter making more sense from an IT point of view).
This time, the modification resulted in a slight performance drop (from 5 minutes to 5 minutes 30 seconds), due to the cost of updating the UI during loading.
From a user perspective, the software was much faster this way!
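As a rough sketch of that last change (not the bank's actual code, which isn't shown here), the pattern is simply to put the window up first and let the data arrive afterwards. Here is a minimal Tkinter example in Python, with a fake load_data() standing in for the real five-minute startup:

    import threading
    import time
    import tkinter as tk

    def load_data():
        time.sleep(3)                      # stand-in for the slow startup work
        return ["position 1", "position 2", "position 3"]

    root = tk.Tk()
    status = tk.Label(root, text="Loading positions...")
    status.pack(padx=40, pady=40)

    result = {}

    def worker():
        result["data"] = load_data()       # runs off the UI thread

    def poll():
        # Tkinter widgets should only be touched from the main thread,
        # so the UI polls for the worker's result instead.
        if "data" in result:
            status.config(text="\n".join(result["data"]))
        else:
            root.after(100, poll)          # check again in 100 ms

    threading.Thread(target=worker, daemon=True).start()
    poll()
    root.mainloop()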
I was once working on a CMS for images. The admin would basically browse through pages of user-made images and check the ones he wanted to publish. I wrote a nice manual on how the system works, but since everybody knows people don't read manuals, I put some guidance on the page telling the admin what to do (in this case, something like: "Check the box for every image you want to publish").
It wasn't long before some guy came to pull my sleeve: "There's a bug in your program. It actually tosses the images I don't select, and not the ones I select."
The problem was solved by asking him to read aloud the text on the page.
"While novice software designers expect their users to behave rationally, that's far from being the case; I've seen many times that the user's perception is totally disconnected from reality, or their feedback is obviously irrational. I think we are the ones who should adapt, not the other way around."
Are you saying we should adapt to irrational behaviour? Software development is already irrational enough (dynamic languages, test driven development, ...), and you expect us to unilaterally bend over backwards to accommodate some distorted expectations?
A few years ago, I designed a small application which was mainly aimed at helping users to input complex data in a database.
Their old method was to input everything into an Excel sheet (without validation of any kind) and then to run a VBA macro.
My new program added validation, and was able to auto-fill almost half of the data they previously manually entered.
I expected it to be a success... which it wasn't... at all :)
"It's just impossible to use", they said...
I had tested it, asked my mother to test it... my software was fine...
In fact, those users were so used to inputting repetitive data that they used only the keyboard, never the mouse. And of course, I hadn't thought of managing the tab order correctly, so the cursor jumped all over the place every time they hit Tab - hence the "impossible to use" comment!
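For what it's worth, the fix is usually a line or two per field. Here is a minimal sketch of setting an explicit tab order, assuming PyQt5 and made-up field names (the original application isn't shown, so this is only illustrative):

    import sys
    from PyQt5.QtWidgets import QApplication, QLineEdit, QVBoxLayout, QWidget

    app = QApplication(sys.argv)
    form = QWidget()
    layout = QVBoxLayout(form)

    name = QLineEdit(form)
    reference = QLineEdit(form)
    quantity = QLineEdit(form)
    for field in (name, reference, quantity):
        layout.addWidget(field)

    # Make Tab walk the fields in the order a keyboard-only user fills them in,
    # instead of whatever order the widgets happened to be created in.
    QWidget.setTabOrder(name, reference)
    QWidget.setTabOrder(reference, quantity)

    form.show()
    sys.exit(app.exec_())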
A couple of weeks ago, my piano teacher and I were bouncing ideas off of each other concerning meta-composing music software. The idea was this:
There is a system taking MIDI input from a bunch of instruments, which pushes output to the speakers and lights. The software running on this system analyzes the MIDI data it's getting and determines which sounds to use, based on triggers set up by the composer (when I play an F7 chord 3 times within 2 seconds, switch from the harpsichord sound to the piano sound), pedals, or actual real-time analysis of the music. It would control the lights based on the performance and the sounds of the instruments in a similar fashion - the musician would only have to vaguely specify what they wanted, and real-time analysis of their playing would do the rest. Procedurally generated music could play along with the musician on the fly as well. Essentially, the software would play along with the performer, with one guiding the other. I imagine it would take some practice to get used to such a system, but it could have quite incredible results.
I'm a big fan of improv jazz. One characteristic of improv that is lacking from other art forms is its temporality. A painting can be appreciated 10 or 1000 years after it has been painted, but music (especially extemporized music) is as much about the performance as it is the creation. I think the software I described would add a great deal to the performance, as playing the exact same piece with it would result in a completely different show each time.
So, now for the questions.
Am I crazy?
Does software to do any or all of this exist yet? I've done some research and haven't turned up anything. The key to this system is that it is running during the performance.
Were I to write something like this, would a scripting language such as Python be fast enough to do the computations that I need? Presumably it'd be running on a fairly quick system, and could take advantage of the 2^n core processors Intel keeps releasing.
Can any of you share your experience and advice concerning interfacing with musical instruments and lights and the like?
Have any ideas or suggestions? Cold and harsh criticism?
Thanks for your time in reading this, and for any and all advice!
(And sorry for the joke in the tags, I couldn't resist.)
People have used Max/MSP to do this kind of thing with MIDI, creating video accompaniment or just MIDI accompaniment. It's a completely domain-specific app, probably inspired by Smalltalk or something, which barely any "real" programmer could love - but musician-programmers do.
Despite the text on the site I just linked to, and the fact that 'everyone' uses the commercial version, it wasn't always a commercial product. IRCAM eventually released its own lineage, called jMax. PureData, mentioned in another post here, is another rewrite of that lineage.
There's also CSound, which wasn't meant to be real-time, but can likely run pretty close to real time now that you have a decent computer compared to where CSound started.
Some people have also hacked Macromedia Director extensions to allow doing MIDI stuff in Lingo... That's very outdated, so some of them have moved on to more modern Adobe environments.
Look at PureData. It can do extensive midi analysis and folks use it for performance.
Indeed, here's a video that flashes past a puredata screen. It shows someone interacting with a rather complex instrument using PD.
Also, look at CSounds.
I have used PyAudio quite extensively for dealing with raw audio input, and found it to be very unpythonic, acting much more like a very thin wrapper over C code. However, if you're dealing with MIDI rather than raw waveforms, your task is quite a bit simpler, and Python should be quite fast enough - unless you play at 10,000 beats per minute :)
Some of the issues: detecting simultaneity, harmonic (formal - i.e., chord structure) analysis.
This is also an 80/20 problem: if you restrict the chord progressions allowed, it becomes quite a bit simpler. After all, what does "playing along" mean, anyway, right?
(Also, at electronic music conferences I've been to, there are lots of people doing various real-time accompaniment experiments based on input sound and movement.) Good luck!
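To make the "Python should be quite fast enough" point a bit more concrete, here is a minimal sketch (my own, not from the system described in the question) of the F7-three-times-in-two-seconds trigger, assuming the mido library for MIDI input; switch_patch() is a made-up stand-in for whatever actually changes the sound:

    import time
    import mido

    F7_PITCH_CLASSES = {5, 9, 0, 3}        # F, A, C, Eb
    WINDOW_SECONDS = 2.0
    REQUIRED_HITS = 3

    def switch_patch(name):
        # Made-up stand-in for whatever layer actually changes the sound.
        print("Switching to", name)

    held = set()                           # pitch classes currently held down
    hits = []                              # times at which a full F7 was struck

    with mido.open_input() as port:        # default MIDI input port
        for msg in port:
            if msg.type == 'note_on' and msg.velocity > 0:
                held.add(msg.note % 12)
            elif msg.type in ('note_off', 'note_on'):   # note_on with velocity 0 is a release
                held.discard(msg.note % 12)

            if F7_PITCH_CLASSES <= held:
                now = time.monotonic()
                hits = [t for t in hits if now - t <= WINDOW_SECONDS] + [now]
                held.clear()               # count each chord strike once
                if len(hits) >= REQUIRED_HITS:
                    switch_patch("piano")
                    hits.clear()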
You might also look at ChucK and SuperCollider, the two most popular 'real' realtime music programming languages.
Also, you might be surprised at how much you can accomplish with Ableton Live racks.
(and it's CSound. No 's' at the end)
see also:
Keykit
Arx
I have no idea if the second one is actually real or worth looking at. Keykit, however, is.
You might contact Gary Lee Nelson in the TIMARA department at Oberlin. 20 years ago I did a project that auto-generated the rhythm section for 12 bar blues and I recall him describing a tool that he knew of that did essentially what you're describing.
You might be interested in GenJam
The answer to your question is no - you're not crazy. Similar systems exist, but your description is pretty vague to begin with, so it's not much of a spec to judge against. I suggest you start writing a prototype and see how it does. Something extremely small and simple. Existing systems be damned.
I'm using C++ on the Win32 API (no MFC). I started writing my sequencer back on the Amiga 500. It doesn't do lights, but there's plenty to do in just music.
Good luck to you. It's an EXTREMELY fun project. I'd say don't pattern your project on how other projects work, because, if you ask me, they don't work so great ;) And the fun is being able to do something different.
Do you use a formal event to get people talking in your IT department? Something like a monthly meetup in a social place, an internal wiki/chat space, or just a regular "information market" with some presentations about technology or projects, made by your staff for your staff? Do you invite sales people to participate, or is it a closed event for programmers only?
How do you get people to participate in these events? Do you allow them to spend work time on knowledge transfer, or do you treat it as an integral part of work time?
I also wonder how to monitor the progress of knowledge transfer itself. How do you spot critical one-person points of failure in your projects? There are several methods to avoid them, like staff swapping or the "FIFO" approach to bug fixing.
Note: Ok, this is a very very noisy question and I hope to fix it after a few comments. Sorry for the mixup.
edit: My personal experience is that there is a very high barrier for people to start contributing. It looks like they won't put in the (minimal) extra time to edit our wiki, or spend an hour in the afternoon talking about technology topics with the development staff. It's as if people don't like our wiki, our document management system or the meetings. Maybe it's because it's all free-to-use and not mandated by management. But I don't like forcing people into it - is that the right way, though?
One example: our wiki holds pages about projects, recording who worked on each one so there's a first contact in case of questions. But nobody besides one colleague and me is creating these pages...
Knowledge transfer and knowledge management have one drawback. They seem to cost an awful lot: if everybody knows what I know, am I still needed? All the time I spend bringing others up to speed - what do I gain from it?
The best way to go about this is to lead by example. Share your knowledge: in a wiki, in a blog, in conversation; make it easily accessible, and talk about the benefits you get from it - fewer people come to interrupt you and ask you stuff, as they can get an answer easily without even getting up. And show them that you are still there.
This, together with the other things mentioned, will actually win out. One more thing: one of my employers kept paying me 1/3 of my salary for another year after I left (on my own initiative), just to keep my knowledge base up and running. Did he have to? No, it was his property anyway. But it motivated the people still working for him to share their knowledge.
I think all of the above. But you're forgetting the most important way.
The most efficient way to transfer knowledge is to have people work together. You might think about doing 1-on-1 code reviews or even pair programming, making knowledge transfer an integral part of the work.
I think it depends on the knowledge you are trying to transfer. I've found the following:
Technical knowledge: a "how-to" guide with screenshots and a short demo - similar to the way you see new features presented at a conference. The added benefit is that what you know is documented for when you leave the company.
Problem solving: informal discussions, short internal projects, lessons learned and an internal FAQ system which EVERYONE is responsible for updating.
Soft Skills (people skills): social meetings/outings/informal events etc.
Measuring that is going to be difficult though, as no matter how you transfer your knowledge there will always be varying degrees of uptake; after all, just because I do something one way doesn't mean it's correct. Another developer/designer/manager may have a different way of doing the same thing with the same end result.
Mauro
At my workplace we use a wiki. The workplace is small enough (~20 people) so that you can always ask the person who was most involved in a particular project, however it is expected that you have searched on the wiki before you ask "the expert". If you cannot find your answer in the wiki, then you should add it after you have discussed it with your co-worker.
One word: Lunch
You should encourage people to do the things you want them to do. You should "feed the animal". Look at Stack Overflow: what do you think about badges? Why do you think these wonderful things exist? Thanks to ego, there is nothing you can't get done. Give them badges - real, wearable badges. They will wear them with happiness; they will do the work with happiness.
Btw, yes, I am a boss :)
Although I am still a student, when I did work experience 12 months ago, all the IT departments from within the corporation (I was 'working' for a large corporation which owns several mines in the area) would have a daily telephone conference, where each employee would say what they had been doing, and then talk about something new they had discovered and any other interesting tid-bits.
A couple of ways I have seen so far:
A wiki is suitable for internal knowledge, for example environment and project-specific topics.
Open doors policy
Encourage asking questions.
Voluntary presentations. Find out who has special knowledge and make it easy and attractive for them to set up a short presentation about it.
Project post-mortem documents: a wrap-up meeting moderated by someone outside the project team, held after the project is finished or terminated.
Compulsory presentations.
Project presentation when they go live. Technologies used etc.
If someone is sent to a conference, they should give a presentation about the new technology they saw there.