I need to play a short sound repeatedly (simulating a metronome) while recording sound.
What I did for the metronome was basically to set up a DispatcherTimer with a specific Interval and fire a SoundEffect on every tick. For the recorder I call XNA's FrameworkDispatcher.Update method every 33 milliseconds (also using a DispatcherTimer for that).
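In rough outline, the setup looks like this (a trimmed-down sketch; the field names, the 500 ms metronome interval, and the tick SoundEffect loaded elsewhere are placeholders rather than my exact code):

private DispatcherTimer metronomeTimer;   // System.Windows.Threading
private DispatcherTimer xnaTimer;
private SoundEffect tickSound;            // Microsoft.Xna.Framework.Audio, loaded via ContentManager

private void StartMetronome()
{
    // Metronome: fire the SoundEffect on every timer tick.
    metronomeTimer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(500) };
    metronomeTimer.Tick += (s, e) => tickSound.Play();
    metronomeTimer.Start();

    // Pump XNA so SoundEffect playback and the Microphone keep getting serviced.
    xnaTimer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(33) };
    xnaTimer.Tick += (s, e) => FrameworkDispatcher.Update();
    xnaTimer.Start();
}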
I run the metronome and it works fine, but when I begin to record there is a short break in the sound (it's hard to say whether it delays the Interval or just mutes the sound), and after a while (while already recording) the metronome continues to tick, but with a more 'flattened' sound.
Is this a hardware limitation, or am I doing something wrong?
I think this is related to the hardware. I was making an app that modifies sound as it is captured. When I used a headset (with a mic) connected to the device, there was a big echo on playback. When I used only headphones (and the device mic) everything was fine. It was tested on HTC and Nokia devices - same results, but the HTC was a little better :)
I am drawing on an NSView using a timer that is set to update every 0.02 seconds. On each update some physical simulation makes a step, and then Canvas!.needsDisplay = true. It works when the app is in the foreground (usually), but when lags happen, the simulation progresses even though the view hasn't reflected it yet. How do I pause the timer during those times so that the simulation only advances when the NSView can show it? I don't want to call step_over from inside drawRect; that seems like a bad idea because it would make stopping the simulation harder.
Generally this kind of update should be done the other way around, by letting the display ask you for frames as it can display them. On the Mac this is done with a CVDisplayLink (CADisplayLink is the iOS counterpart). Configure it with a method you want to be called when a frame can be drawn.
Generally you do want your simulation to keep moving forward, even if it means dropping frames occasionally. For that, you check the timestamp and use that to work out what time to use for your new frame. But if you only want to move forward when the display can show it, then just update once per call.
Note that generating at 50fps is often going to mismatch the system that's trying to draw at 60fps, so you're going to wind up missing frames occasionally. That's one of several reasons not to try to push drawing with a timer.
See also "Alternative of CADisplayLink for Mac OS X". Note that trying to draw at 50fps with Core Graphics usually isn't going to give good results in any case. The right tool here in OS X is Core Animation (or SpriteKit for games on 10.10, or OpenGL for more advanced high-speed rendering). You can do very basic animations with an NSTimer (and we did for years before Core Animation came along), but it's not really a tool for complex drawing.
I'm new to Windows Phone 8 and need your help capturing screen activity as a video. I have to make a video of the activities being performed on screen.
One solution that came to mind is to capture the screen as images using a timer fired at regular intervals, but that doesn't seem like the right approach, since I have to produce a video of the screen activity. Please suggest how to handle this problem.
There's no built-in way of doing what you want.
You will need two things:
Run a dispatcher timer as you describe, grabbing a frame on each tick (see the sketch after this list)
Find code that will encode these frames into a movie. That's not an API that the phone supports - you will need to find existing code and use it. I am not aware of such code existing, but I have only looked for it once or twice and not very hard. You could, potentially, create an MJPEG, which is a fairly simple video format, but even that isn't trivial and the resulting file size can be prohibitive.
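For the first step, a minimal sketch of grabbing frames on a timer might look something like this (the 100 ms interval, the frame file names, and writing to isolated storage are arbitrary choices for illustration; turning the saved frames into a movie still needs the separate encoder discussed above):

using System;
using System.IO.IsolatedStorage;
using System.Windows;
using System.Windows.Media.Imaging;
using System.Windows.Threading;

public class ScreenCapturer
{
    private readonly DispatcherTimer timer = new DispatcherTimer();
    private int frameIndex;

    public void Start()
    {
        timer.Interval = TimeSpan.FromMilliseconds(100); // roughly 10 frames per second
        timer.Tick += OnTick;
        timer.Start();
    }

    public void Stop()
    {
        timer.Stop();
    }

    private void OnTick(object sender, EventArgs e)
    {
        var root = Application.Current.RootVisual as UIElement;
        if (root == null) return;

        // Render the app's current visual tree into a bitmap.
        var bitmap = new WriteableBitmap(root, null);

        // Persist each frame as a JPEG in isolated storage for a later encoding pass.
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = store.CreateFile(string.Format("frame_{0:D5}.jpg", frameIndex++)))
        {
            bitmap.SaveJpeg(stream, bitmap.PixelWidth, bitmap.PixelHeight, 0, 85);
        }
    }
}

Keep in mind that rendering the visual tree this way only captures your app's own Silverlight content (not, for example, video playback surfaces or system UI), and that writing a JPEG on every tick is itself costly enough to affect what you're recording.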
I am almost done developing my first game for the Windows Phone 7 OS. Unfortunately, I've run into a rather nasty problem. The simple task of playing background music and a sound effect at the same time creates serious distortion in the sound effect playback, and the overall result is awful. This happens when the sound comes out of the phone's speaker.
If, however, I use headphones, all the problems go away (i.e., no distortion whatsoever). I am also using the same code on an HTC HD7 and I don't see the problem there, either on the speaker or on headphones.
The sound effect is an MP3 at 48 kbps and the music is also an MP3, at 96 kbps.
All I do is the following:
In the LoadContent method:
backgroundMusic = Content.Load<Song>("Music");
soundEffect = Content.Load<SoundEffect>("SoundEffect");
soundInst = soundEffect.CreateInstance(); // I am using a SoundEffectInstance to play the sound effect
MediaPlayer.Play(backgroundMusic);
MediaPlayer.IsRepeating = true;
Later on, I just issue soundInst.Play().
If, however, I do not play any music, the sound effect plays just fine.
Again, this only seems to happen on my Nokia Lumia 800 as it seems ok on the HTC HD7. However, the funny thing is that in the majority of the games that I play on the Lumia 800, I am not noticing this music/sound effect problem (I've only noticed it in one other game).
I have tried playing with the volume as well, but it doesn't help. Even if the music volume is 0, the sound effect is not played correctly.
In conclusion, if and only if I stop the music from playing completely, regardless of the volume, the sound effect plays correctly.
Any ideas?
Thanks in advance!
We are developing a music player app for OS X Lion (10.7) that applies different audio effects to a selected music file.
We have used the Audio Unit and AUGraph APIs to achieve this.
However, after connecting all the audio unit nodes, when we call AUGraphStart(mGraph) the graph takes around 1 second to invoke the first I/O callback.
Because of this there is a slight delay at the beginning of playback.
How can we avoid this delay? Could anyone provide any input to help us solve this issue?
One solution is to start the audio graph running before displaying any UI that the user could use to start playback. Since the audio units will then be running, you could fill any audio output buffers with silence before the appropriate UI event. If the buffers are small/short, the latency from any UI event till an output buffer is filled may be small enough to be below normal human perception.
I have a custom UISlider and use it to set the currentPlaybackTime of an MPMoviePlayerController object.
The problem is that when I scrub at a fast rate using the slider, it doesn't respond as fast as I would like.
Is there a better way to build a fast, interactive scrubber for iPad? I'm targeting OS 3.2 and up.
Well, there are two issues, and only one of them is under your direct control.
Multimedia content is commonly compressed using some kind of delta compression, so quick and exact seeking is not a trivial task. Since that is common and you cannot directly change it, you will have to live with it.
The only way to increase seek responsiveness on the content side (when encoding) is to reduce the GOP size - that is, fewer P-frames between the I-frames.
When using a slider or a similar control, you could, instead of directly connecting the current playback position to it, handle manual changes in an indirect fashion: run a timer-based job that, whenever the slider/scrubber has been moved, adjusts the playback position towards the new value. While the player is seeking, prevent the scrubber from receiving feedback from the current playback position, but allow it again once the player is back in the playing state. That way the user does not directly experience the clunky seek feedback.