I'm looping this sound file: http://engy.us/misc/ThrusterLoop.wav . It plays perfectly in Windows on any number of players, but when I load it in as a SoundEffect in XNA it has these annoying clicks at the start and end of playback.
If I loop it for a while it doesn't do any of these annoying clicks in the middle. If I play it as a standalone sound, it will still click at the start. It's doing this clicking both in the emulator and on my physical device.
Why is it doing this? Is there something wrong with my sound file? It's a 16-bit Stereo 44.1 kHz PCM WAV file, which I assumed was pretty standard.
(edit2) I captured the audio produced by XNA's playback and compared it with the original waveform. Take a look:
http://engy.us/pics/Waveform_Original.png
http://engy.us/pics/Waveform_EmulatorXNA.png
Something is pretty screwed up with that playback! The two large amplitude changes must be the clicks I heard, and it seems to scramble the first bit of the file. Putting silence at the start probably helped some people because scrambled-up silence still doesn't produce any clicks.
Use a program like Audacity to look at your sound's waveform. You can reduce or eliminate the clicking by lining the waveform up so that it starts and ends on the zero line (the center line); a click is the speaker jumping instantly to a non-zero amplitude. In Audacity you can do this with Fade In at the front and Fade Out at the back, though over a long selection this will make the volume of your sound 'pulse'. To avoid that, zoom in as far as possible and apply Fade In / Fade Out only to the smallest selectable region at the front and back.
One thing to note is that you want the wave to continue to... wave, when it loops. That is, if the waveform starts by heading up from zero at the beginning of the sound, it should be coming back up to zero from below at the end, so that if you were to copy+paste the sound right after itself, it forms a continuous wave instead of a peak.
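The fade trick can also be scripted instead of done by hand in Audacity. Below is a minimal sketch (plain Python, standard-library `wave`/`struct` only, assuming 16-bit PCM of any channel count) that applies a micro-fade of a few dozen samples at each end so the file starts and ends exactly on the center line:

```python
import struct
import wave

def defade_clicks(in_path, out_path, fade_samples=64):
    """Apply a very short linear fade at both ends so the waveform
    starts and ends exactly on the center line, removing edge clicks.
    Assumes 16-bit PCM (any channel count)."""
    with wave.open(in_path, "rb") as w:
        params = w.getparams()
        frames = w.readframes(w.getnframes())
    samples = list(struct.unpack("<%dh" % (len(frames) // 2), frames))
    ch = params.nchannels
    n = len(samples) // ch          # frames, i.e. samples per channel
    fade = min(fade_samples, n)
    for i in range(fade):
        gain = i / fade             # 0.0 at the very edge, rising inward
        for c in range(ch):
            samples[i * ch + c] = int(samples[i * ch + c] * gain)
            samples[(n - 1 - i) * ch + c] = int(samples[(n - 1 - i) * ch + c] * gain)
    with wave.open(out_path, "wb") as w:
        w.setparams(params)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))
```

At 44.1 kHz, 64 samples is about 1.5 ms, short enough to be inaudible as a fade but enough to kill the click.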
Assuming all my assets are linked properly, how can I make my HTML file Cardboard-compatible using oculusRiftEffect.js or StereoEffect.js? My Gist link is below:
https://gist.github.com/4d9e4c81a6b13874ed52.git
Please advise
Exactly what are you trying to do? "Stereo effect" isn't very descriptive.
However, assuming that you have stereo video files (one for the left eye, one for the right eye), you'd just play them back in a left sphere (seen by the left eye) and a right sphere (seen by the right eye). The spheres are offset by the interpupillary distance (IPD), usually about 55mm. (It'd actually be whatever the stereo videos themselves are offset by.)
So, you might ask - what happens when I turn around? The IPD goes negative. When I look up or down? It goes to zero. Welcome to stereo video.
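The way the effective baseline collapses can be sketched with a little trigonometry: project the fixed camera separation onto the viewer's eye axis. This is a hypothetical simplification (it assumes the camera baseline lies along one world axis and ignores stitching entirely), but it reproduces the three cases above:

```python
import math

def effective_baseline(ipd_mm, yaw_deg, pitch_deg):
    """Project a fixed left/right camera baseline onto the viewer's
    eye axis. yaw rotates about the vertical axis (0 = facing the
    cameras' forward direction); pitch tilts the view up/down."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    # Facing forward: full baseline. Turned 180 degrees: the sign
    # flips (each eye effectively sees the other camera's view).
    # Looking straight up or down: the projection vanishes.
    return ipd_mm * math.cos(yaw) * math.cos(pitch)
```

So a 55mm baseline becomes -55mm when you turn around, and 0mm looking at the poles, which is exactly where stereo panoramic video falls apart.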
Note that there are ways around this, but you're not going to get them from a GoPro without a lot of special processing. At best you can sync the IPD direction with the averaged lens separation across the video streams, but you'll always get stitching errors. The equirectangular format (i.e. the full panorama stored as a rectangular video) isn't the best either; you'll always get the wrong answer at the poles. Not using a sphere is probably how this will evolve, but it's the simplest solution until we get a better one.
But, given all that, give it a shot, the brain is very forgiving - and it'll look 3D-ish most of the time. This stuff is being refined all the time.
I am trying to detect and hide a logo that is dynamically positioned in a video.
In this video the logo is positioned at the top, and after a few minutes it moves to the bottom, and so on.
Is it possible to detect the logo every time it changes place and hide it with ffmpeg?
I tried delogo, but it requires a fixed x/y position, so it doesn't work in my case!
Thank you very much!
Edit: It cannot and should not be done. Ignore everything after this.
Simple idea but likely a complex execution.
I would be shocked if ffmpeg had functionality to do this in a straightforward manner. You could manually find every logo shift, use ffmpeg to cut the video at those points, apply delogo to each segment with its own coordinates, then concat the segments back together.
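That manual workflow can at least be scripted. The sketch below (Python; a hypothetical helper, assuming you've already written down each segment's time range and logo position by hand) emits the ffmpeg cut + delogo commands plus a final concat-demuxer step:

```python
def build_delogo_commands(infile, outfile, segments):
    """Given hand-found segments [(start_s, end_s, x, y, w, h), ...],
    emit one ffmpeg command per segment that cuts it out and applies
    delogo at that segment's logo position, plus a concat step.
    Sketch only: uses ffmpeg's -ss/-to output seeking, the delogo
    filter, and the concat demuxer."""
    cmds = []
    parts = []
    for i, (start, end, x, y, w, h) in enumerate(segments):
        part = "part%03d.mp4" % i
        parts.append(part)
        cmds.append(
            "ffmpeg -i %s -ss %s -to %s -vf delogo=x=%d:y=%d:w=%d:h=%d %s"
            % (infile, start, end, x, y, w, h, part)
        )
    # Build the concat demuxer list ("file partNNN.mp4" per line),
    # then stitch the cleaned segments back together without re-encoding.
    cmds.append("printf 'file %s\\n' " + " ".join(parts) + " > parts.txt")
    cmds.append("ffmpeg -f concat -i parts.txt -c copy %s" % outfile)
    return cmds
```

Cutting and re-encoding each segment separately does cost quality at the seams; frame-accurate cuts need the segments re-encoded rather than stream-copied.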
If you want to automate this then you're likely going to have to do a bit of image-processing coding. I'm not going to code it, but I'll go through a few highlights.
First, ask: what are you looking for?
Is the logo a static block that covers what it is over? Is it slightly transparent? Does it change size? Does it change in intensity?
I don't know how much effort went into creating the logo so I can only make guesses.
If it's static and entirely in the foreground, then this is a simple search followed by a replacement with whatever you want. Easy.
If it's slightly transparent but of constant size, you will want to reverse-engineer the template they used, search each frame for that pattern, then undo it.
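For the static, fully opaque case, the "simple search" is just template matching. Here's a toy pure-Python version (sum of absolute differences over every position; in practice you'd use something like OpenCV's matchTemplate, which does the same thing much faster):

```python
def find_logo(frame, template):
    """Brute-force template search: return the (x, y) position with
    the smallest sum of absolute differences. `frame` and `template`
    are 2D lists of grayscale values."""
    fh, fw = len(frame), len(frame[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            sad = sum(
                abs(frame[y + j][x + i] - template[j][i])
                for j in range(th) for i in range(tw)
            )
            if best is None or sad < best:
                best, best_pos = sad, (x, y)
    return best_pos
```

You'd only need to re-run the search when the match score at the last known position suddenly degrades, which is exactly the "logo moved" event you're trying to detect.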
If they got fancier and you want this to be automated then you'll be delving into machine learning and more advanced techniques.
I can think of a whole plethora of nice features for an application like this, but that's enough to make a start.
I am working on a game and I need to have two characters talking to each other. I know that XNA does not allow me to play a movie other than fullscreen, so I need to actually "play" the animation inside my game app in a different manner. The characters have animated environments around them, so the animations are not simple head movements; as such, animating the characters via keyframing in a 3d model is not an option. The dialogue between the two characters is a cut-scene between levels, so it is not part of the gameplay itself.
I am not sure what the best approach to this would be so if you have any ideas, please let me know.
This is what I thought of so far:
1. Create all the individual frames for the characters as images. Load these images in a spritesheet and go through each frame at my desired framerate.
The problem with this approach is that the maximum spritesheet texture of 2048x2048 would not allow for too many frames as the characters are something around 300x200. The other problem is that I have two characters so, the minimum scenario would require me to create in memory two 2048x2048 spritesheets... and I'd like to keep the memory requirements low.
2. Load a batch of frames (images), play them, then de-allocate them and load the next set. I know that in general it is not a good idea to load lots of small textures and switch between them in drawing calls (performance wise) but it seems as though I have no other choice in this case.
I am afraid that unloading stuff from memory and loading other stuff in while in the Update-Draw loop would slow down the entire scene... so not sure if this is a sane approach.
The other idea is to make an mp4/wmv with the whole thing [char animation, subtitles of the dialogue, etc] but the interface that hosts these characters would not be as "smooth" as when rendered directly, etc...
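For what it's worth, option 1's limits are easy to put numbers on. A quick sketch of the arithmetic (assuming frames are packed on a grid and textures decompress to 32-bit RGBA in memory):

```python
def spritesheet_budget(sheet_w, sheet_h, frame_w, frame_h, bytes_per_pixel=4):
    """How many frames fit on one sheet, and what the sheet costs in
    RAM (assuming uncompressed 32-bit RGBA once loaded)."""
    cols = sheet_w // frame_w
    rows = sheet_h // frame_h
    frames = cols * rows
    mem_mb = sheet_w * sheet_h * bytes_per_pixel / (1024 * 1024)
    return frames, mem_mb
```

For 300x200 frames on a 2048x2048 sheet that's 6x10 = 60 frames per 16 MB sheet, i.e. only a couple of seconds of animation at 24-30 fps, per sheet, per character.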
Thank you for all your suggestions,
Marius
EDIT 1:
I have tested scenario number 2 and it seems that the performance is OK.
I went with scenario 2. It works for my particular case, but I am sure it won't work for all cases.
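Scenario 2 can be organized as a double-buffered ("ping-pong") loader so the Update/Draw loop never stalls on a full batch reload: while one batch is being drawn, the next is loaded and the previous one becomes garbage. A language-agnostic sketch (Python here; `load` is a hypothetical placeholder for whatever texture-loading call your content pipeline provides):

```python
class FrameBatcher:
    """Double-buffered frame streaming: one batch is current (being
    drawn), the next is already loaded, older batches are released."""

    def __init__(self, frame_names, batch_size, load):
        self.names = frame_names
        self.batch_size = batch_size
        self.load = load            # hypothetical texture loader
        self.pos = 0
        self.current = self._load_batch()
        self.next = self._load_batch()

    def _load_batch(self):
        batch = [self.load(n)
                 for n in self.names[self.pos:self.pos + self.batch_size]]
        self.pos += self.batch_size
        return batch

    def advance(self):
        """Call when the current batch has been fully drawn."""
        self.current = self.next        # old batch is released here
        self.next = self._load_batch()  # may be empty at the end
        return self.current
```

In a real XNA game you'd do the `_load_batch` work on a worker thread so the loads overlap with drawing instead of happening inside the Update call.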
What happens during a display mode change (resolution, color depth) on an ordinary computer (classic desktops and laptops)?
It might not be trivial, since video cards differ so much, but a few things seem common to all of them:
- The screen goes black (understandable, since the signal is turned off)
- It takes many seconds for the signal to return with the new mode
And if it is under D3D or GL:
- The graphics device is lost and all VRAM objects must be reloaded, making the mode change take even longer
Can someone explain the underlying nature of this, and specifically why a display mode change is not a trivial reallocation of the backbuffer(s) and takes such a "long" time?
The only thing that actually changes are the settings of the so-called RAMDAC (a digital-to-analog converter directly attached to the video RAM); with today's digital connections it's more like a "RAMTX" (a DVI/HDMI/DisplayPort transmitter attached to the video RAM). DOS graphics programmer veterans probably remember the fights between the RAMDAC, the specification, and their own code.
It actually doesn't take seconds for the signal to return. That part is quick; it's the display device that takes its time to synchronize with the new signal parameters. With well-written drivers the change happens almost immediately, between vertical blanks. A few years ago, when displays were, err, dumber and analogue, you could see the picture go berserk for a short moment after changing the video mode, until the display resynchronized (maybe I should take a video of this while I still own equipment capable of it).
Since what's actually going on is just a change of RAMDAC settings, there's also no data necessarily lost, as long as the basic parameters stay the same: bits per pixel, number of components per pixel, and pixel stride. In fact OpenGL contexts usually don't lose their data across a video mode change. Of course the visible framebuffer layout changes, but that also happens when moving a window around.
DirectX Graphics is a bit of a different story, though. There is device-exclusive access, and whenever you switch between Direct3D fullscreen mode and the regular desktop, all graphics objects are swapped out; that's the reason DirectX Graphics is so laggy when switching between a game and the Windows desktop.
If the pixel data format changes, it usually requires a full reinitialization of the visible framebuffer, but today's GPUs are exceptionally good at mapping heterogeneous pixel formats onto a target framebuffer, so no delays are necessary there either.
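To put that in numbers: the visible framebuffer itself is tiny next to the rest of VRAM, so reallocating it is not what costs time; re-uploading every object after a lost Direct3D device is. A back-of-the-envelope sketch (the figures are illustrative only):

```python
def mode_change_cost(old_mode, new_mode, vram_objects_mb):
    """Compare the bytes touched by reallocating just the visible
    framebuffer against the VRAM objects that must be re-uploaded
    after a lost device. Modes are (width, height, bytes_per_pixel)."""
    fb_old = old_mode[0] * old_mode[1] * old_mode[2]
    fb_new = new_mode[0] * new_mode[1] * new_mode[2]
    realloc_mb = (fb_old + fb_new) / (1024 * 1024)
    return realloc_mb, vram_objects_mb
```

Going from 1024x768x32 to 1920x1080x32 touches about 11 MB of framebuffer, while a game can easily hold hundreds of MB of textures and buffers that all have to be re-created after the device is lost; that ratio is the lag you feel.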
I am porting a game from iPad to Mac.
Every time I start the game, a certain set of sounds has an irritating noise at the end of playback, much like a short burst of heavy static.
Also, the set of sounds that produce the noise is not the same on every execution; each time a different set of sounds has the noise at the end.
Are there any OpenAL settings that address this situation?
Solutions tried:
- Converted the mp3 files to higher and lower bitrates and played them back. The noise still persists.
It sounds like (get it?) you're passing in a buffer that is larger than your data, and the noise at the end is the result of attempting to interpret those bytes as sound.
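Concretely, the byte count handed to something like alBufferData has to be the size of the WAV 'data' chunk, not the size of the file or of some rounded-up read buffer. A small sketch (Python, standard library only) that walks the RIFF chunks and returns the right number:

```python
import struct

def wav_data_size(path):
    """Walk the RIFF chunks of a WAV file and return the size of the
    'data' chunk, i.e. the byte count that belongs in alBufferData.
    Passing the whole file size instead appends header/trailer bytes
    that get played back as static."""
    with open(path, "rb") as f:
        riff, _size, wave_id = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF" or wave_id != b"WAVE":
            raise ValueError("not a RIFF/WAVE file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                raise ValueError("no data chunk found")
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            if chunk_id == b"data":
                return chunk_size
            f.seek(chunk_size + (chunk_size & 1), 1)  # chunks are word-aligned
```

The "different sounds each run" symptom fits this too: whatever garbage happens to sit past the end of the real data changes from run to run.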