I'm trying to use the Sound(x) command with different "x" values, but it only produces the same beep every time.
program Sounds;
uses
  crt;
begin
  Sound(1000);
  Sound(500);
  Delay(1000);
  Sound(300);
  Sound(150);
  Delay(1000);
  NoSound;
end.
What's wrong with this code?
Sound no longer works on Windows.
I made a patch that works on some systems, but it was never really picked up.
--
Sound plays a tone continuously until NoSound is called, and each new Sound call simply replaces the current frequency, which is why your back-to-back calls collapse into what sounds like a single beep. If you only want a tone for a certain duration, you can use Beep(freq, duration) from the windows unit.
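For illustration, a minimal sketch in C++ against the Win32 Beep call (the same call the Free Pascal windows unit wraps); each Beep blocks for the requested duration, so no NoSound is needed:

#include <windows.h>

int main()
{
    // Beep(frequencyHz, durationMs) blocks until the tone has finished,
    // so consecutive calls simply play one after another.
    Beep(1000, 500);
    Beep(500, 500);
    Beep(300, 500);
    Beep(150, 500);
    return 0;
}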
Well, I'm using Code::Blocks as the IDE and WinAVR as the compiler.
F_CPU is set to 8000000UL.
I'm writing code for an ATmega32.
But when I run the compiled code (the *.hex file) in the Proteus design suite (ISIS), _delay_ms(1000) doesn't give a delay of 1 second. I don't know whether it's right or wrong, but I've set the CKSEL fuses to (0100), Int. RC 8 MHz, in "edit component".
What's wrong?
please....
Have you tried setting the compiler optimization to something other than -O0? From the avr-libc docs regarding the _delay_* functions:
In order for these functions to work as intended, compiler
optimizations must be enabled, and the delay time must be an
expression that is a known constant at compile-time.
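To make that concrete, here is a minimal sketch assuming an ATmega32 toggling PB0: F_CPU has to match the real clock and be defined before <util/delay.h> is included, the _delay_ms() argument is a compile-time constant, and the build needs optimization enabled (e.g. -Os):

#define F_CPU 8000000UL      // must match the actual clock (8 MHz internal RC here)

#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRB |= (1 << PB0);          // PB0 as output, e.g. an LED

    for (;;)
    {
        PORTB ^= (1 << PB0);     // toggle the pin
        _delay_ms(1000);         // argument is a known compile-time constant
    }
}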
Using PWM for servo control, I figured out that even with this Internal 8 MHz setting, Proteus actually simulates the part with a 1 MHz clock. If you change F_CPU to 1000000UL you will see that the delay works just fine.
It's just a Proteus simulation lag. On the real device your delay function will work properly. To simulate time delays accurately, the better choice is AVR Studio.
I'm planning on making a clock. An actual clock, not something for Windows. However, I would like to be able to write most of the code now. I'll be using a PIC16F628A to drive the clock, and it has a timer I can access (actually, it has 3, in addition to the clock it has built in). Windows, however, does not appear to have this function. Which makes making a clock a bit hard, since I need to know how long it's been so I can update the current time. So I need to know how I can get a pulse (1Hz, 1KHz, doesn't really matter as long as I know how fast it is) in Windows.
There are many timer objects available in Windows. Probably the easiest to use for your purposes would be the Multimedia Timer, but that's been deprecated. It would still work, but Microsoft recommends using one of the new timer types.
I'd recommend using a threadpool timer if you know your application will be running under Windows Vista, Server 2008, or later. If you have to support Windows XP, use a Timer Queue timer.
There's a lot to those APIs, but general use is pretty simple. I showed how to use them (in C#) in my article Using the Windows Timer Queue API. The code is mostly API calls, so I figure you won't have trouble understanding and converting it.
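As a rough sketch of the Timer Queue route in C++ (the article linked above uses C#; the 1-second due time and period here are just an example):

#include <windows.h>
#include <stdio.h>

// Fires once per period on a thread-pool thread.
VOID CALLBACK TickCallback(PVOID param, BOOLEAN /*timerOrWaitFired*/)
{
    LONG* ticks = static_cast<LONG*>(param);
    InterlockedIncrement(ticks);                // count elapsed periods
}

int main()
{
    LONG ticks = 0;
    HANDLE hTimer = NULL;

    // NULL queue = default timer queue; due time and period of 1000 ms = a 1 Hz tick.
    if (!CreateTimerQueueTimer(&hTimer, NULL, TickCallback, &ticks,
                               1000, 1000, WT_EXECUTEDEFAULT))
        return 1;

    Sleep(5500);                                // let it tick a few times
    printf("ticks seen: %ld\n", ticks);

    // INVALID_HANDLE_VALUE waits for any in-flight callback to finish.
    DeleteTimerQueueTimer(NULL, hTimer, INVALID_HANDLE_VALUE);
    return 0;
}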
The LARGE_INTEGER is just an 8-byte block of memory that's split into a high part and a low part. In assembly, you can define it as:
MyLargeInt equ $
MyLargeIntLow dd 0
MyLargeIntHigh dd 0
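For comparison, the Win32 headers define LARGE_INTEGER as a union of those two 32-bit halves and a 64-bit QuadPart, so the same idea viewed from C++ looks like this (small sketch):

#include <windows.h>
#include <stdio.h>

int main()
{
    // The low and high 32-bit parts and the 64-bit QuadPart all alias
    // the same 8 bytes of memory.
    LARGE_INTEGER value;
    value.LowPart  = 0;
    value.HighPart = 1;
    printf("%lld\n", value.QuadPart);   // prints 4294967296, i.e. 2^32
    return 0;
}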
If you're looking to learn ASM, just do a Google search for [x86 assembly language tutorial]. That'll get you a whole lot of good information.
You could use a waitable timer object. Since Windows is not a real-time OS, you'll need to make sure you set the period long enough that you won't miss pulses. A tenth of a second should be safe most of the time.
Additional:
The const LARGE_INTEGER you need to pass to SetWaitableTimer is easy to define in NASM; it's just an eight-byte value. Note that this second argument is the due time, measured in 100-nanosecond units, and a negative value means "relative to now"; the repeat period is the third argument and is given in milliseconds:
duetime: dq -1000000 ; first tick 100 ms from now (100-nanosecond units)
Pass the address of duetime as the second argument to SetWaitableTimer, and 100 (milliseconds) as the period to fire ten times a second.
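The same flow in C++, as a sketch with minimal error handling:

#include <windows.h>
#include <stdio.h>

int main()
{
    // FALSE = synchronization timer: it auto-resets after each satisfied wait.
    HANDLE hTimer = CreateWaitableTimer(NULL, FALSE, NULL);
    if (!hTimer) return 1;

    // Due time: 100-nanosecond units, negative = relative to now (-1000000 = 100 ms).
    // Period: milliseconds, which makes the timer fire repeatedly.
    LARGE_INTEGER dueTime;
    dueTime.QuadPart = -1000000LL;
    if (!SetWaitableTimer(hTimer, &dueTime, 100, NULL, NULL, FALSE))
        return 1;

    for (int tick = 0; tick < 10; ++tick)   // ten ticks = roughly one second
    {
        WaitForSingleObject(hTimer, INFINITE);
        printf("tick %d\n", tick);
    }

    CancelWaitableTimer(hTimer);
    CloseHandle(hTimer);
    return 0;
}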
This guy says yes:
http://web.tiscalinet.it/giordy/midi-tech/lowmidi.htm
Same with a really old book from 1998 (Maximum MIDI).
MSDN doesn't mention it.
I'm not getting any sound.
I fill a char buffer with status|note|velocity|status|note|velocity...
Set lpData, dwBufferLength, and dwFlags of a MIDIHDR struct
call midiOutPrepareHeader (returns MMSYSERR_NOERROR)
call midiOutLongMsg (returns MMSYSERR_NOERROR)
Still no sound! Spamming midiOutShortMsg is working but will that work for slower machines? Did they change the functionality?
Thanks.
I'm an idiot! I figured it out: Microsoft GS Wavetable Synth does NOT support sending multiple short messages in midiOutLongMsg. The MIDI Mapper DOES!
midiOutShortMsg should be plenty fast, even on slow machines. MIDI interfaces themselves (the hardware, that is, though some software will limit itself) run at 31,250 baud. This of course ignores any slow code you may have wrapped around your midiOutShortMsg calls.
Anyway, technically you should also be able to get away with one status byte, if the following notes use the same status byte. So, if you want to do note on/off (using velocity 0 for off) and those notes are on the same channel, you could do this:
status|note|velocity|note|velocity|note|velocity|note|velocity
This is called running status.
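A sketch of what such a running-status buffer can look like when handed to midiOutLongMsg, opened on the MIDI Mapper (which, per the discussion above, accepts several channel messages in one buffer; the notes are just a C major chord for illustration):

#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

int main()
{
    HMIDIOUT hOut = NULL;
    if (midiOutOpen(&hOut, MIDI_MAPPER, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
        return 1;

    // Running status: one 0x90 (note on, channel 0) status byte, then
    // note/velocity pairs. Velocity 0 would double as "note off".
    BYTE chord[] = { 0x90, 60, 100, 64, 100, 67, 100 };

    MIDIHDR hdr = {};
    hdr.lpData = reinterpret_cast<LPSTR>(chord);
    hdr.dwBufferLength = sizeof(chord);
    hdr.dwBytesRecorded = sizeof(chord);        // amount of valid data in the buffer

    midiOutPrepareHeader(hOut, &hdr, sizeof(hdr));
    midiOutLongMsg(hOut, &hdr, sizeof(hdr));
    while (!(hdr.dwFlags & MHDR_DONE))          // crude wait for the driver to finish
        Sleep(10);
    midiOutUnprepareHeader(hOut, &hdr, sizeof(hdr));

    Sleep(1000);                                // let the chord ring
    midiOutReset(hOut);                         // silence any hanging notes
    midiOutClose(hOut);
    return 0;
}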
I'm creating a game engine using wxWidgets and OpenGL. I'm trying to set up a timer so the game can be updated regularly. I don't want to use wxTimer, because it's probably not accurate enough for what I need. I'm using a while (true) and a wxStopWatch:
while (true) {
    stopWatch.Start();
    <handle events> // I need a function for this
    game->OnUpdate();
    game->Refresh();
    if (stopWatch.Time() < 1000 / 60)
        wxMilliSleep(1000 / 60 - stopWatch.Time());
}
What I need is a function that will handle all the wxWidgets events, because right now my app just freezes.
Instead of using a while (true) loop, I'm using EVT_IDLE, and it works perfectly.
UPDATE: It doesn't. It's slightly jerky on Windows, and when tested on a Mac, it was extremely jerky. Apparently EVT_IDLE doesn't get called consistently on Windows, and even less on a Mac.
UPDATE2: It actually mostly does. It's fine on a Mac; I misunderstood my Mac tester's reply.
"ave you requested idle events to be generated at the maximum rate? You have to call RequestMore() on the event, if you don't you will get the next idle event only after some other event has been processed. Note that constant idle processing will cause 100% CPU load on one core."
This works. I have the following code in a graphical window:
BEGIN_EVENT_TABLE(MyCanvas, wxScrolledWindow)
    EVT_PAINT(MyCanvas::OnPaint)
    EVT_IDLE(MyCanvas::OnIdle)
    EVT_MOTION(MyCanvas::OnMouseMove)
END_EVENT_TABLE()
The canvas needs to be updated when my_canvas->Refresh(bClearBackground) is called and not otherwise. To do this I needed to make a modification, as the program was eating up half of the CPU time (i.e. 100% of one core on a dual core).
void MyCanvas::OnIdle(wxIdleEvent &event)
{
    wxPaintEvent unused;
    OnPaint(unused);
    event.RequestMore(false);
}
Setting the parameter of RequestMore() to false makes the app only ask for more when it's needed, i.e. only when Refresh() has been called.
Have you requested idle events to be generated at the maximum rate? You have to call RequestMore() on the event; if you don't, you will get the next idle event only after some other event has been processed. Note that constant idle processing will cause 100% CPU load on one core.
Even if you request more idle events you can't be sure how long it will take for the next one to arrive. Therefore to get smooth animation you will need to calculate the elapsed time since the last event, and update the display accordingly.
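A minimal sketch of that idea (the class and method names, GameCanvas and UpdateGame, are placeholders rather than names from the code above):

#include <wx/wx.h>
#include <wx/stopwatch.h>

class GameCanvas : public wxWindow
{
public:
    GameCanvas(wxWindow* parent) : wxWindow(parent, wxID_ANY)
    {
        m_lastMs = m_stopWatch.Time();
        Bind(wxEVT_IDLE, &GameCanvas::OnIdle, this);
    }

private:
    void OnIdle(wxIdleEvent& event)
    {
        long now = m_stopWatch.Time();          // milliseconds since construction
        double dt = (now - m_lastMs) / 1000.0;  // elapsed time since the last idle event
        m_lastMs = now;

        UpdateGame(dt);                         // advance the simulation by dt seconds
        Refresh(false);                         // schedule a repaint

        event.RequestMore();                    // ask for the next idle event
    }

    void UpdateGame(double /*dt*/) { /* game logic goes here */ }

    wxStopWatch m_stopWatch;
    long m_lastMs;
};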
Is there a way to programmatically detect on Windows whether the microphone is on?
No, microphones don't tell you whether they're 'on', or whether a particular sound channel is connected to a microphone device. The best you can do is to read audio data from the input channel you suspect to be a microphone (e.g. the Windows default input device/channel) and see if there's any signal on it.
To do that you'd have to remove any DC offset and look for any signal above a reasonable noise floor. (Be generous: many cheap audio input devices are quite noisy even when there is no signal coming in. A mid-band filter/FFT would also be useful to detect only signals in the mid-range of a voice and not low-frequency hum and transient clicks.)
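As a sketch (not something I've verified), that check against the waveIn API could look roughly like this; the buffer length and the noise-floor threshold are guesses you'd have to tune:

#include <windows.h>
#include <mmsystem.h>
#include <cmath>
#include <vector>
#pragma comment(lib, "winmm.lib")

// Record ~250 ms from the default input device and report whether the signal
// deviates from its DC offset by more than an assumed noise floor.
bool LooksLikeLiveMicrophone()
{
    WAVEFORMATEX fmt = {};
    fmt.wFormatTag = WAVE_FORMAT_PCM;
    fmt.nChannels = 1;
    fmt.nSamplesPerSec = 11025;
    fmt.wBitsPerSample = 16;
    fmt.nBlockAlign = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    HWAVEIN hIn = NULL;
    if (waveInOpen(&hIn, WAVE_MAPPER, &fmt, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
        return false;                           // no usable input device at all

    std::vector<short> samples(fmt.nSamplesPerSec / 4);     // ~250 ms of audio
    WAVEHDR hdr = {};
    hdr.lpData = reinterpret_cast<LPSTR>(samples.data());
    hdr.dwBufferLength = (DWORD)(samples.size() * sizeof(short));

    waveInPrepareHeader(hIn, &hdr, sizeof(hdr));
    waveInAddBuffer(hIn, &hdr, sizeof(hdr));
    waveInStart(hIn);
    while (!(hdr.dwFlags & WHDR_DONE))          // crude wait for the buffer to fill
        Sleep(10);
    waveInStop(hIn);
    waveInUnprepareHeader(hIn, &hdr, sizeof(hdr));
    waveInClose(hIn);

    // Remove the DC offset (the mean) and measure the remaining RMS level.
    double mean = 0;
    for (short s : samples) mean += s;
    mean /= samples.size();
    double rms = 0;
    for (short s : samples) rms += (s - mean) * (s - mean);
    rms = std::sqrt(rms / samples.size());

    const double kNoiseFloor = 50.0;            // assumed threshold; tune per device
    return rms > kNoiseFloor;
}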
This is not tested in any way, but I would try to read some samples and see if there is any variation. If the mike is on, you should get varying values from the ambient sound. If the mike is off, you should just get zeros. Again, this is just how I imagine things should work - I don't know if they actually work that way.
Due to a happy accident, I may have discovered that yes there is a way to detect the presence of a connected microphone.
If your Windows "recording devices" panel shows "no microphone", then this approach (using the Microsoft Speech API) will work and confirm you have no mic. If Windows thinks you do have a mic, however, this won't disagree.
#include <sapi.h>
#include <sapiddk.h>
#include <sphelper.h>
#include <atlbase.h>   // CComPtr; COM must already be initialized (CoInitialize)

CComPtr<ISpRecognizer> m_cpEngine;
m_cpEngine.CoCreateInstance(CLSID_SpInprocRecognizer);

// Ask SAPI for the default audio-input token; this fails if none is registered.
CComPtr<ISpObjectToken> pAudioToken;
HRESULT hr = SpGetDefaultTokenFromCategoryId(SPCAT_AUDIOIN, &pAudioToken);
if (FAILED(hr)) ::OutputDebugString(TEXT("no input, aka microphone, detected"));
More specifically, hr will come back as this result:
SPERR_NOT_FOUND 0x8004503a -2147200966
The requested data item (data key, value, etc.) was not found.