I am porting a game from iPad to Mac.
Every time I start the game, a certain set of sounds has an irritating noise at the end of playback, much like a short burst of heavy static.
Also, the sounds that produce the noise are not the same on every execution; each time, a different set of sounds has the noise at the end.
Is there any OpenAL setting that addresses this situation?
Solutions tried:
Converted the mp3 files to both higher and lower bitrates and played them back; the noise still persists.
It sounds like (get it?) you're passing in a buffer that is larger than your data, and the noise at the end is the result of attempting to interpret those bytes as sound.
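To make the principle concrete, here is a minimal Python sketch, with the standard wave module standing in for whatever decoder you actually use: compute the exact number of decoded PCM bytes and hand only that many to the audio API (in OpenAL, the size argument of alBufferData). The loader below is an illustration under those assumptions, not your actual code path.

```python
# Sketch: pass the exact decoded byte count to the audio API, not the
# capacity of the buffer allocated for decoding. In OpenAL this is the
# `size` argument of alBufferData(buffer, format, data, size, freq).
import wave

def load_pcm(path):
    with wave.open(path, "rb") as wf:
        n_frames = wf.getnframes()
        data = wf.readframes(n_frames)               # only the audio payload
        frame_size = wf.getsampwidth() * wf.getnchannels()
        rate = wf.getframerate()
    exact_size = n_frames * frame_size               # bytes of real audio
    assert len(data) == exact_size                   # no trailing garbage
    return data, exact_size, rate
```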
I have this 5-minute-long 24 fps video, and I need to extend its length by some integer factor, something like 12, though preferably 24. I need to keep the frame rate the same, which I have achieved through moviepy, but the bitrate has changed. The whole point of this is to create a video that is equally intensive on an APU, but longer. I don't understand the implications of higher bitrates (data rate or total bitrate), but I need to. Is there a way to concatenate a video with itself and maintain those values in the 'details' tab of the file properties? Keep in mind I'm doing low-power measurements.
I tried Microsoft Clipchamp, but that gave me 30 fps. I looked for other free video editors, but none gave 24 fps. I tried moviepy, which gave 24 fps but a much lower bitrate.
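One route worth sketching (an assumption on my part, not a verified solution) is ffmpeg's concat demuxer with stream copy: the video is repeated without re-encoding, so the frame rate, codec, and bitrates should stay exactly as in the source. The file names and repeat count below are placeholders, and it assumes ffmpeg is on the PATH.

```python
# Sketch: repeat a video N times without re-encoding, using ffmpeg's concat
# demuxer with stream copy (-c copy), which preserves fps and bitrate.
import subprocess

def repeat_video(src="input.mp4", dst="output.mp4", n=12):
    with open("list.txt", "w") as f:
        for _ in range(n):
            f.write(f"file '{src}'\n")           # concat demuxer playlist
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", "list.txt", "-c", "copy", dst],
        check=True,
    )

repeat_video()
```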
The animation in my 2D game is 24 FPS. Is there any good reason not to set the game's target frame rate to 24 FPS? Wouldn't that improve performance consistency and extend battery life on mobile? What would I be giving up?
You write nothing about the kind of game, but I will try to answer anyway.
Setting 24 FPS would indeed increase performance consistency and battery life.
The downside, besides laggy visuals, is increased input lag. That will affect not only the 3D controls but also every UI button. Your game will feel a bit more laggy than other games, a very subtle feeling that adds up after a while.
You could get away with 24 FPS depending on the nature of your game, but you should test it with different people; some are more sensitive to this issue than others.
If you set up the animations with their correct frame rate, Unity will interpolate them to the game's frame rate, so there is no need to use the same value for the animations and the game itself.
How would I take a song input and output the same song without certain frequencies?
Based on my research so far, the song should be broken down into chunks, each chunk put through an FFT, the target frequencies reduced, an inverse FFT applied, and the chunks stitched back together. However, I am unsure if this is the right approach to take, and if so, how I would convert from the audio to the FFT input (what seems to be a vector or matrix), how to reduce the target frequencies, how to convert back from the FFT output to audio, and how to restitch the chunks.
For background, my grandpa loves music. However, recently he cannot listen to it, as he has become hypersensitive to certain frequencies within songs. I'm a high school student with some background in coding, and I'm just getting into algorithmic work, so I have very little experience using these algorithms. Please excuse me if these are basic questions; any pointers would be helpful.
EDIT: So far, I've understood the basic premise of the FFT (through basic 3Blue1Brown YouTube videos and the like), learned that it is available through scipy/numpy, and figured out how to convert from YouTube audio to 0.25-second chunks in WAV format.
Your approach is right.
Concerning subquestions:
from the audio to FFT input - assign the audio samples to the real part of a complex signal; the imaginary part is zero
how to reduce the target frequencies - multiply the FFT results near the target frequency by a smooth function (to diminish artifacts) that is zero at that frequency and rises to 1.0 a few bins away
how to convert back - just take the inverse FFT (don't forget the scale multiplier, e.g. 1/N) and copy the real part into the audio channel
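Here is a minimal numpy sketch of those three steps for a single mono chunk; the target frequency and notch width are placeholders you would tune, and rfft/irfft take care of the real/imaginary bookkeeping and the 1/N scaling for you.

```python
# Sketch of the three steps above for one mono chunk of samples.
import numpy as np

def notch_chunk(samples, sample_rate, f_target=4000.0, width_hz=200.0):
    spectrum = np.fft.rfft(samples)                       # audio -> FFT input
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)

    # Smooth mask: 0.0 at f_target, rising to 1.0 about width_hz away.
    dist = np.abs(freqs - f_target)
    mask = np.clip(dist / width_hz, 0.0, 1.0)
    mask = 0.5 - 0.5 * np.cos(np.pi * mask)               # raised-cosine edge

    return np.fft.irfft(spectrum * mask, n=len(samples))  # back to audio
```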
Also consider using simpler digital filtering - a band-stop or notch filter.
Arbitrarily found examples: example1, example2
(calculating the parameters may require some understanding of DSP)
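For the notch-filter route, a minimal sketch assuming scipy is available; the 4000 Hz centre frequency, Q value, and file names are placeholders to tune to the offending frequency.

```python
# Sketch of the band-stop/notch route with scipy.signal.
import numpy as np
from scipy.io import wavfile
from scipy.signal import iirnotch, filtfilt

rate, audio = wavfile.read("song.wav")           # hypothetical input file
audio = audio.astype(np.float64)

b, a = iirnotch(w0=4000.0, Q=30.0, fs=rate)      # design the notch
filtered = filtfilt(b, a, audio, axis=0)         # zero-phase filtering

wavfile.write("song_filtered.wav", rate, filtered.astype(np.int16))
```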
I'm looping this sound file: http://engy.us/misc/ThrusterLoop.wav . It plays perfectly in Windows on any number of players, but when I load it in as a SoundEffect in XNA it has these annoying clicks at the start and end of playback.
If I loop it for a while it doesn't do any of these annoying clicks in the middle. If I play it as a standalone sound, it will still click at the start. It's doing this clicking both in the emulator and on my physical device.
Why is it doing this? Is there something wrong with my sound file? It's a 16-bit Stereo 44.1 kHz PCM WAV file, which I assumed was pretty standard.
(edit 2) I captured the sound produced by XNA playback and compared it with the original waveform. Take a look:
http://engy.us/pics/Waveform_Original.png
http://engy.us/pics/Waveform_EmulatorXNA.png
Something is pretty screwed up with that playback! The two large amplitude changes must have been the clicks I heard. It seems to scramble up the first bit somewhat. Putting silence at the start probably helped some people because scrambled up silence doesn't produce any clicks.
Use a program like Audacity to take a look at the waveform of your sound. You can reduce or eliminate 'clicking' by making sure the waveform starts and ends at the zero-amplitude mark (the center line). In Audacity you can do this with Fade In and Fade Out at the front and back respectively, though this will cause the volume of your sound to 'pulse'. To get around that, zoom in as much as possible and apply Fade In / Out only to the smallest possible selectable area at the front and back.
One thing to note is that you want the wave to continue to... wave, when it loops. That is, if your waveform starts by going up from the zero line, it should be coming back up to the zero line from below at the end, so that if you were to copy and paste the sound right after itself, it forms a continuous wave instead of a peak.
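If you would rather do this programmatically than in Audacity, here is a minimal sketch of the same idea, assuming a 16-bit stereo WAV like yours and numpy/scipy installed; the 5 ms fade length and output file name are placeholders.

```python
# Sketch: apply a short fade-in/fade-out so playback starts and ends at
# zero amplitude, which removes the start/end clicks.
import numpy as np
from scipy.io import wavfile

rate, audio = wavfile.read("ThrusterLoop.wav")    # 16-bit stereo PCM
audio = audio.astype(np.float64)

fade_len = int(rate * 0.005)                      # ~5 ms ramp at each end
ramp = np.linspace(0.0, 1.0, fade_len)[:, None]   # broadcasts over channels

audio[:fade_len] *= ramp                          # fade in
audio[-fade_len:] *= ramp[::-1]                   # fade out

wavfile.write("ThrusterLoop_faded.wav", rate, audio.astype(np.int16))
```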
I'm new to Video technology, so any feedback (such as if I've underspecified the problem) would be greatly appreciated.
I need to display an animation (currently composed of about a thousand PNGs) on Windows and am trying to determine the best video codec or parameters for the job.
- Video playback must be smooth at 30 fps
- Output display is 1920x1080 on a secondary monitor
- Quality does not matter (within limits)
- Will be displaying alpha-blended animation on top, so no DXVA
- Must run on older hardware (Core Duo 4400 + nVidia 9800)
Currently using DirectShow to display the video.
Questions:
Is it easier on the CPU to shrink the source to 1/2 size (or even 1/4) and have the CPU stretch it at run time?
Is there a video codec that is easier on the CPU than others?
Are there parameters for video codecs that mean less decompression is required? (The video will be stored on the HD, so size doesn't matter except as it impacts program performance).
So far:
- H.264 from ffmpeg defaults produces terrible tearing and some stuttering.
- Uncompressed video from VirtualDub produces massive stuttering.
There are so many different degrees of freedom to this problem, I'm flailing. Any suggestions by readers would be much appreciated. Thank you.
MJPEG should work. I used it for 1080i60 some 3 years back, and the playback was never an issue. Even encoding worked on-the-fly with a machine of quite similar performance to what you describe.
File size will be about 10 MB/s for good-quality video.
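If it is useful, here is a hedged sketch of producing such an MJPEG file from your PNG sequence with ffmpeg; the frame-name pattern and quality value are placeholders, and it assumes ffmpeg is on the PATH.

```python
# Sketch: encode the PNG sequence to MJPEG at 30 fps with ffmpeg.
# -q:v controls JPEG quality (lower = better, larger file).
import subprocess

subprocess.run(
    ["ffmpeg", "-y", "-framerate", "30", "-i", "frame%04d.png",
     "-c:v", "mjpeg", "-q:v", "3", "-pix_fmt", "yuvj420p", "animation.avi"],
    check=True,
)
```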
Shrinking the video will help, because if you are drawing the video to screen using e.g. DirectX, you can use the GPU to stretch it.