I have a very specific problem: I want to write my own DMX software to control our DMX fixtures. Does anyone know an interface to use? It would be great if there were a framework for it, so that I only have to send the channel and the value to the interface.
I noticed your question was for Mac, but I wrote a Windows-specific C++ program, which could probably be easily modified. It's adapted from the C# example on Enttec's OpenUSB website. See:
https://github.com/chloelle/DMX_CPP
There's some really good information, plus code samples (including a working class that I wrote), here: Lighting USB OpenDMX FTD2XX DMXking
Ultimately, you end up setting byte values between 0 (off) and 255 (0xFF, brightest) in a byte array.
It's fairly trivial to implement simple effects such as fades or chases.
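For instance, a linear fade on one channel is just a loop that rewrites the array and resends the frame. Here's a minimal Java sketch; DmxInterface and its send() method are placeholders for whatever your controller's API actually provides:

```java
// Minimal fade sketch. DmxInterface/send() are hypothetical stand-ins
// for your controller's real API (e.g. Enttec OpenUSB).
public class DmxFade {
    interface DmxInterface {
        void send(byte[] frame); // transmit one 512-channel DMX frame
    }

    // Fade one channel from 0 up to 255 over roughly the given duration.
    static void fade(DmxInterface dmx, byte[] frame, int channel, long millis)
            throws InterruptedException {
        for (int value = 0; value <= 255; value++) {
            frame[channel] = (byte) value;   // 0x00 = off, 0xFF = brightest
            dmx.send(frame);                 // resend the whole frame
            Thread.sleep(millis / 256);      // crude timing; DMX refresh is ~44 Hz
        }
    }
}
```

A chase is the same idea: instead of ramping one channel, you walk a full-on value across a range of channels on a timer.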
You would need to use a USB controller to convert your program's instructions into signals for the actual hardware.
I suggest using a simple iPhone application talking to a web service, which then interacts with the hardware.
The code samples above are in C#, but they will show you how to interact with a DMX controller.
What do I need to know to stream a sound of a given frequency (that is, generated programmatically at runtime, not from a file) to the loudspeakers of the system on MS Windows?
Please be as specific as you can. I have an idea of the Win32 API, but I'm more of a Linux programmer, so function names are welcome.
The answer with the most concrete steps and functions to call will be accepted.
The first function you need to use is waveOutOpen; after opening the device, you fill a buffer with your generated samples, prepare it with waveOutPrepareHeader, and submit it with waveOutWrite. You can follow the examples on MSDN that use these functions, or search the Internet for sample code. Take a look, for example, at
http://www.planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=4422&lngWId=3
Is it possible to use NSSpeechRecognizer with a pre-recorded audio file instead of direct microphone input?
Or is there any other speech-to-text framework for Objective-C/Cocoa available?
Added:
Rather than using the voice at the machine that is running the application, external devices (e.g. an iPhone) could be used to send just a recorded audio stream to that desktop application. The desktop Cocoa app would then process it and do whatever it's supposed to do using the assigned commands.
Thanks.
I don't see any obvious way to switch the input programmatically, although the first paragraph of the "Recognizing Speech" section in the "Speech" companion guide seems to imply that other inputs can be used. I think this is meant to be set via System Preferences, though; I'm guessing it uses the primary audio input device selected there.
I suspect, though, that you're looking for open-ended speech recognition, which NSSpeechRecognizer is not. If you're looking to transform arbitrary pre-recorded audio into text (i.e., make a transcript of a recording), you're completely out of luck with NSSpeechRecognizer, as you must give it an array of "commands" to listen for.
Theoretically, you could feed it the whole dictionary, but I don't think that would work, since you usually have to give it clear, distinct commands. Its performance would suffer, I would guess, if you gave it a bunch of stuff to analyze in real time.
Your best bet is to look at third-party open-source solutions. There are a few generalized packages out there (none specifically for Cocoa/Objective-C), but this poses another question: what kind of recognition are you looking for? There are two main forms of speech recognition: 'trained' recognition is more accurate but less flexible across different voices and recording environments, whereas 'open' recognition is generally much less accurate.
It'd probably be best if you stated exactly what you're trying to accomplish.
I'm looking for suggestions on which GUI tool is most appropriate for implementing my study. I'm using the Java language. I would like the graphics to simulate a house in which graphical changes apply without user input from mouse or keyboard; my user input is in the form of SMS. I'm hoping to animate or simulate a smart home through the conditions I have set in my program. Thanks in advance, guys!
Your question is very underspecified. I will assume that you are at the early stages of producing a hand-rolled home automation program; you probably need:
1) an environment to let you test the core logic of the system (i.e. "If the system is in state X and I issue command Y, what does it actually do, and will I lose the contents of my freezer?")
2) an environment to let you test the SMS communications module
3) a demo mode to show prospective customers what it does (this is my best guess at what is being requested here)
Now (3) could fill in for (1), but is a lot more programming effort, so from the start you probably want a simple text interface to do (1).
In general, you almost certainly want a modular system: a core logic system supported by at least two input models (SMS and keyboard), three output models (text debug, graphical demo, and control-line/wireless signals for the actual hardware), and various ancillary stuff (configuration reading, saved state handling). Come to think of it, since you probably need a way to probe the current state of the system, you should make the saved state and condition probe code share a single framework as well.
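To make that concrete, the seams might look something like this in Java (every name here is illustrative, not a prescribed design):

```java
// Every name here is illustrative, not a prescribed design.
interface InputModel {              // SMS and keyboard both implement this
    Command nextCommand();          // blocks until the next command arrives
}

interface OutputModel {             // text debug, graphical demo, real hardware
    void render(HouseState state);
}

record Command(String text) {}
record HouseState(String description) {}

class CoreLogic {
    private HouseState state = new HouseState("all off");

    // "If the system is in state X and I issue command Y, what happens?"
    HouseState apply(Command cmd) {
        state = new HouseState("handled: " + cmd.text()); // real rules go here
        return state;
    }
}

class HomeAutomation {
    static void run(InputModel in, OutputModel out, CoreLogic logic) {
        while (true) {                                   // the main event loop
            out.render(logic.apply(in.nextCommand()));
        }
    }
}
```

The SMS module, the keyboard test harness, and the graphical demo then become interchangeable implementations of InputModel/OutputModel, and the core logic can be tested without any of them.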
I've been reading about TDD, and would like to use it for my next project, but I'm not sure how to structure my classes with this new paradigm. The language I'd like to use is Java, although the problem is not really language-specific.
The Project
I have a few pieces of hardware that come with an ASCII-over-RS232 interface. I can issue simple commands, get simple responses, and control them as if from their front panels. Each one has a slightly different syntax and a very different set of commands. My goal is to create an abstraction/interface so I can control them all through a GUI and/or remote procedure calls.
The Problem
I believe the first step is to create an abstract class (I'm bad at names, how about 'Communicator'?) to implement all the stuff like Serial I/O, and then create a subclass for each device. I'm sure it will be a little more complicated than that, but that's the core of the application in my mind.
Now, for unit tests, I don't think I really need the actual hardware or a serial connection. What I'd like to do is hand my Communicators an InputStream and OutputStream (or Reader and Writer) that could be from a serial port, a file, stdin/stdout, piped from a test function, whatever. So, would I just have the Communicator constructor take those as inputs? If so, it would be easy to put the responsibility of setting it all up on the testing framework, but for the real thing, who makes the actual connection? A separate constructor? The function calling the constructor? A separate class whose job it is to 'connect' the Communicator to the correct I/O streams?
Edit
I was about to rewrite the problem section in order to get answers to the question I thought I was asking, but I think I figured it out. I had (correctly?) identified two different functional areas.
1) Dealing with the serial port
2) Communicating with the device (understanding its output & generating commands)
A few months ago, I would have combined it all into one class. My first idea for breaking away from this was to pass just the I/O streams to the class that understands the device, but I couldn't figure out who would then be responsible for creating the streams.
Having done more research on inversion of control, I think I have an answer: have a separate interface and class that solve problem #1, and pass an instance to the constructor of the class(es?) that solve problem #2. That way, it's easy to test each part separately: #1 by connecting to the actual hardware and letting the test framework exercise it, and #2 by handing it a mock of #1.
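Concretely, I'm picturing something like this (all names are placeholders, not working code for any particular device):

```java
import java.io.*;

// Rough sketch of the split I have in mind; all names are placeholders.
interface Connection {                 // functional area #1: owns the serial port
    InputStream input();
    OutputStream output();
}

class DeviceCommunicator {             // functional area #2: speaks the protocol
    private final Connection conn;

    DeviceCommunicator(Connection conn) {  // injected, so tests can hand in a fake
        this.conn = conn;
    }

    void sendCommand(String cmd) throws IOException {
        conn.output().write((cmd + "\r\n").getBytes());
        conn.output().flush();
    }
}
// A SerialConnection implementing Connection would wrap the real port;
// the test framework can implement Connection over byte arrays instead.
```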
Does this sound reasonable? Do I need to share more information?
With TDD, you should let your design emerge: start small, with baby steps, and grow your classes test by test, little by little.
CLARIFIED: Start with a concrete class to send one command, and unit test it with a mock or a stub. When it works well enough (perhaps not with all options), test it against your real device, once, to validate your mock/stub/simulator.
Once the class for the first command is operational, start implementing a second command the same way: first against your mock/stub, then once against the device for validation. Now, if you're seeing similarities between your two classes, you can refactor towards your abstract-class-based design - or towards something different.
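As an illustration, a first baby-step test could stub the port with in-memory streams. This assumes the hypothetical Connection/DeviceCommunicator split sketched in the question above, and a made-up command string (JUnit 4):

```java
import java.io.*;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DeviceCommunicatorTest {

    // In-memory stand-in for the serial port; no hardware needed.
    static class FakeConnection implements Connection {
        final ByteArrayInputStream in = new ByteArrayInputStream(new byte[0]);
        final ByteArrayOutputStream out = new ByteArrayOutputStream();
        public InputStream input()   { return in; }
        public OutputStream output() { return out; }
    }

    @Test
    public void sendsCommandTerminatedByCrLf() throws IOException {
        FakeConnection fake = new FakeConnection();
        new DeviceCommunicator(fake).sendCommand("POWER ON");
        assertEquals("POWER ON\r\n", fake.out.toString());
    }
}
```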
Sorry for being a little Linux-centric ...
My favorite way of simulating gadgets is to write character device drivers that simulate their behavior. This also gives you fun abilities, like providing an ioctl() interface that makes the simulated device behave abnormally.
At that point, going from testing to the real world is just a matter of which device(s) you actually open, read and write.
It should not be too hard to mimic the behavior of your gadgets; it sounds like they take very basic instructions and return very basic responses. Again, a simple ioctl() could tell the simulated device that it's time to misbehave, so you can ensure that your code handles such events adequately. For instance, fail intentionally on every n-th instruction, where n is randomly selected upon the call to ioctl().
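If writing a kernel driver is overkill (or you're not on Linux), the same fault-injection trick can be approximated in-process. A rough Java sketch, reusing the hypothetical Connection interface from the question above: wrap the fake port so it fails on every n-th byte written, with n chosen at random:

```java
import java.io.*;
import java.util.Random;

// In-process analogue of the misbehaving simulated device: a Connection
// wrapper that throws on every n-th byte written (names assumed from the
// sketches above).
class FaultyConnection implements Connection {
    private final Connection inner;
    private final int n = 1 + new Random().nextInt(10); // fail every n-th byte
    private int writes = 0;

    FaultyConnection(Connection inner) { this.inner = inner; }

    public InputStream input() { return inner.input(); }

    public OutputStream output() {
        return new FilterOutputStream(inner.output()) {
            @Override public void write(int b) throws IOException {
                if (++writes % n == 0) {
                    throw new IOException("simulated device fault");
                }
                super.write(b);
            }
        };
    }
}
```

Pointing the communicator at a FaultyConnection in a test then exercises the error-handling paths without any hardware.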
After seeing your edits I think you are heading in exactly the right direction. TDD tends to drive you towards a design composed of small classes with a well-defined responsibility. I would also echo tinkertim's advice - a device simulator which you can control and "provoke" into behaving in different ways is invaluable for testing.
I'm looking for an OS X (or Linux?) application that can receive data from a webcam/video input and let you do some image processing on the pixels in something similar to C or Python or Perl; I'm not that bothered about the processing language.
I was considering throwing one together, but figured I'd try to find an existing one first before re-inventing the wheel.
I want to do some experiments with object detection and reading dials and numbers.
If you're willing to do a little coding, you want to take a look at QTKit, the QuickTime framework for Cocoa. QTKit will let you easily set up an input source from the webcam (intro here). You can also apply Core Image filters to the stream (demo code here). If you want to use OpenGL to render or apply filters to the movie, check out Core Video (examples here).
Using the MyMovieFilter demo should get you up and running very quickly.
Found a cross-platform tool called 'Processing'; I actually ran the Windows version to avoid further complications getting the webcams to work.
Had to install QuickTime, and something called gVid, to get it to work, but after the initial hurdle, coding seems like C (I think it gets "compiled" into Java), and it runs quite fast, even scanning pixels from the webcam in real time.
Still have to get it working on OS X.
Depending on what processing you want to do (i.e. if it's a filter that's available in Apple's Core Image filter library), the built-in Photo Booth app may be all you need. There's a commercial set of add-on filters available from the Apple store as well (http://www.apple.com/downloads/macosx/imaging_3d/composerfxeffectsforphotobooth.html).