When playing the same sound multiple times, it sounds OK the first time but becomes distorted (muted, as if it were far away) on subsequent plays. I'm playing from the same buffer (initialized once from a .wav file on startup) using OpenAL. Weirdly, the same happens when playing the sound in Apple apps (Xcode, QuickTime Player, Safari) but not in many third-party apps (VLC, Chrome, ...). The same code works fine on iOS, and the same sound file is also played correctly in the old Carbon version of the app on macOS.
One of the sounds that exhibits the problem is http://www.colibricks.com/Sound131.wav.
You can hear it for yourself by opening that link in Safari and pressing the Play button. The first time it sounds OK; the next times it's like it's really far away. Reload the page, and it sounds OK again the first time you try, then far away again.
Open the same page in Chrome, though, and it sounds OK every time.
Strangely, the old compiled 32-bit Carbon version, which I can no longer compile in current versions of Xcode but which used the same sound files, sounds fine!
I have tried opening the file in Audacity, exporting it in many different formats (wav, aiff, 16 or 32 bit, ...), and playing those files in QuickTime Player. Always the same result: first play OK, then distorted.
This is the code I'm using to play the sounds (after having set up all the sources and buffers in pretty much the standard way):
alSourcef(source[i], AL_GAIN, volume * (1.f/256.f)); // scale the integer volume down to a 0.0-1.0 gain
alSourcei(source[i], AL_BUFFER, 0);                  // detach whatever buffer was attached before
alSourcei(source[i], AL_BUFFER, bufID[which-128]);   // attach the buffer for the requested sound
alSourcePlay(source[i]);
There seems to be some kind of bug in OpenAL or some other part of the audio system that distorts sounds when they are played repeatedly, not just in my app but in other Apple apps too.
I would like to try other ways to play the sounds (OpenAL is deprecated anyway) but am having a hard time finding simple sample code for Core Audio or AVAudioEngine that just plays sounds at different volumes left and right (or extremely basic panning). My needs are really simple: load a bunch of very short sounds into buffers, then play them at different volumes left/right during the game. No special 3D effects, reverbs, and all that stuff. I'm kind of afraid of going through all the work of implementing AVAudioEngine only to find out that the problem remains. And Core Audio seems a bit daunting, unless someone can point me to some good usable code for that? And will that fix the distorted sound?
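From skimming the AVAudioEngine docs, I imagine the replacement would look roughly like this - an untested sketch, where the class name, the voice-pool idea, and the parameter ranges are all just my guesses:
import AVFoundation

// Untested sketch: a pool of AVAudioPlayerNodes sharing one engine,
// roughly standing in for the OpenAL sources. Names are placeholders.
class SoundPool {
    private let engine = AVAudioEngine()
    private var players: [AVAudioPlayerNode] = []
    private var buffers: [AVAudioPCMBuffer] = []

    init(fileURLs: [URL], voices: Int = 8) throws {
        // Decode each short sound into a PCM buffer once, at startup.
        for url in fileURLs {
            let file = try AVAudioFile(forReading: url)
            let buf = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                       frameCapacity: AVAudioFrameCount(file.length))!
            try file.read(into: buf)
            buffers.append(buf)
        }
        // One player node per simultaneous voice, all feeding the main mixer.
        // (Assumes all files share the same format.)
        for _ in 0..<voices {
            let player = AVAudioPlayerNode()
            engine.attach(player)
            engine.connect(player, to: engine.mainMixerNode, format: buffers[0].format)
            players.append(player)
        }
        try engine.start()
    }

    // volume: 0.0...1.0, pan: -1.0 (hard left) ... 1.0 (hard right)
    func play(sound index: Int, voice: Int, volume: Float, pan: Float) {
        let player = players[voice]
        player.volume = volume
        player.pan = pan
        player.scheduleBuffer(buffers[index], at: nil, options: .interrupts)
        player.play()
    }
}
If someone can confirm this is the right shape (and whether it avoids the distortion), that would already help a lot.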
I saw a cool video of someone coding an NES emulator. If you watch it, you'll see that there are no pauses and no hesitations. He gives a bit of information on how this is done on his site. The idea is based on tool-assisted videogame speedruns.
Does anyone know of tools that can be used to create a similar effect in a modern IDE, like Visual Studio? Normal video capture programs like CamStudio don't cut it.
You'll want to capture in high definition so the text is readable. There's nothing worse than a programming tutorial on YouTube where you can't read what the guy is typing.
You'll also want to be able to cut out parts of the recording where there is nothing going on - hesitations and pauses for thought, for example.
It might be a good idea to switch to a low screen resolution (1024x768 perhaps) and run the IDE in fullscreen mode so that there is no unnecessary clutter in the video and the code is easy to see.
Take a look at the list SQLGuru posted to find a video capture program that makes the cut.
I have managed to locate a list of 17 potential apps that could do the trick for you. Check them out here: http://www.freetech4teachers.com/2012/09/17-free-tools-for-creating-screen.html
I need to access the web camera using Java. This is what I want to do:
Access the web cam
Show the video output on screen, so the user can see the web cam working because his face is visible
(I have heard there are some libs which don't show the video output of the webcam)
When the user clicks the save button, take a snapshot and save it
I have tried a number of ways to do this over a long time:
JMF - Now it is dead
FMJ - Now it is dead too
VLCJ - too much, because I am not creating a music/video player, and it expects VLC to be installed
Xuggler - too much, and hard work
JMyron - didn't work
JavaFX - I thought it could do it, but it seems it can't
I would even be satisfied if the library ONLY does what's mentioned above, because that's enough for me. But I expect it to be simple too. It would be really great if it doesn't use DLLs, because it is not platform independent if it does. I'd really appreciate it if it can DETECT the camera, without manually passing the camera name and other info as you have to do in VLCJ (there might be thousands of camera brands, so I can't create a list with a thousand elements in it). And I am creating a desktop application, not a web app.
If you know of a library like this, please be kind enough to let me know. Other libraries (which might not suit all of my requirements, but cover the basics) are also welcome. Please help.
I think the project you are looking for is: https://github.com/sarxos/webcam-capture (I'm the author)
There is an example working exactly as you've described: after it's run, a window appears where, once you press the "Start" button, you can see the live image from the webcam device and save it to a file by clicking "Snapshot" (source code is available; please note that the FPS counter in the corner can be disabled).
The project is portable (WinXP, Win7, Win8, Linux, Mac, Raspberry Pi) and does not require any additional software to be installed on the PC.
The API is really nice and easy to learn. Here is an example of how to capture a single image and save it to a PNG file:
import com.github.sarxos.webcam.Webcam;
import javax.imageio.ImageIO;
import java.io.File;
Webcam webcam = Webcam.getDefault(); // auto-detects the first available camera
webcam.open();
ImageIO.write(webcam.getImage(), "PNG", new File("test.png"));
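And for the live preview part of your requirements, a minimal sketch using the library's Swing component (the window title here is just a placeholder):
import com.github.sarxos.webcam.Webcam;
import com.github.sarxos.webcam.WebcamPanel;
import javax.swing.JFrame;

// Show a live view of the default webcam in a Swing window.
Webcam webcam = Webcam.getDefault();
WebcamPanel panel = new WebcamPanel(webcam); // opens the camera and starts painting frames

JFrame window = new JFrame("Webcam preview");
window.add(panel);
window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
window.pack();
window.setVisible(true);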
For a somewhat small (at least hopefully) project, I am hoping to gain access to the current audio being played through the "main line" (i.e. what is heard through the speakers). Specifically, I'd like to create a visual equalizer for the audio currently being played. I do not wish to capture or "tamper" with the audio in any way, just run a little analysis on it. That being said, I'd imagine access to such information is not handed out nicely in a high-level API.
I noticed a similar question which is concerned with looking at system sound. The accepted answer points to looking into Soundflower's source code. I am not completely averse to doing this, but I'd like to ensure there isn't a simpler way before I go into it (especially because I have no real audio programming experience at the system level).
Any input is very much appreciated,
--Sam
There is no simple way to do this on OS X. You really have to do this from a kext, unfortunately.
We want to make an audio-based web app that will have many sound snippets. We want to cache these files so that performance is good and not dependent on network speed. Can HTML5 cache audio for offline mode?
It certainly seems to me that this should work, and I can't find any documentation that says it shouldn't (either from the W3C or from vendors like Apple), but putting audio files as resources in the cache manifest just doesn't seem to work with Safari on the iPad and iPhone, at least.
Sounds play fine when the app is online (although it seems to load them anew each time and not cache them), and it doesn't complain about the resources not being there when in offline mode (which it does immediately if you forget to include a JavaScript, CSS, HTML, or image resource).
Instead of complaining (or loading): if the element has a control, that control is replaced with a box that says "Cannot play audio file.". Alternatively, if it's an element without a control - i.e. accessed via a JavaScript call to .play() - then it simply doesn't play (it doesn't cause any errors, there is just no sound, and the JavaScript otherwise continues to execute normally).
I have tested this with pretty small (<20k) files and the result is the same, so it doesn't seem to be size related, just a blanket refusal to cache. I'm not sure if you can encode sound as a resource in a page (e.g. encoded in base64) the way you can with images, but I'm going to investigate that option - I suspect it would be possible. I tried encoding audio data as data URI strings and even tried generating audio on the fly. Both work fine in Safari on the desktop, but neither works on iPhone/iPad OS (at least on version 3.x - I have not tried iOS 4, but it won't be out for a week, and is not expected on the iPad for a few months, even if they do fix it).
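For concreteness, the data-URI variant is just an audio element with the base64 payload inlined (payload elided here):
<audio controls src="data:audio/wav;base64,..."></audio>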
I would guess the refusal to cache sound files in iOS is an implementation bug or intended limited functionality. It's certainly annoying and a show stopper for a lot of use cases.
I am not sure what happens with other HTML5 clients; I'd be interested to know (in particular on Android). Google's support for audio hasn't been stellar either, so it may suffer from the same limitations.
You could always develop a decoding/encoding layer that talks to the client-side SQLite DB.
I don't see any reason why you can't specify audio files in the cache manifest.
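For reference, a manifest listing audio files would look something like this (the file names are placeholders; remember the page has to reference it via <html manifest="..."> and the file must be served as text/cache-manifest):
CACHE MANIFEST
# v1 - bump this comment to force a re-download

CACHE:
index.html
app.js
sounds/click.wav
sounds/ding.wav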
We had an arcade/redemption game running on Win98, but the hardware that can run it has finally gone obsolete. The game used a number of scaling effects, some through the 3D path, and played some tricks moving things in and out of video memory. If I were to undertake porting it to run on Windows 7, how much trouble would it likely be? Would it mostly be recompilation, or have the APIs undergone such transformation that I might as well rewrite the device interfaces?
Don't think of it as porting to Win7. Just port to DX9 and let DX handle the Win7 parts. In fact, you could probably just leave it as is and it should run - but you mention you do crazy things with video memory that I assume have nothing to do with DX (i.e. either through GDI or some other hack?). Anyway, the DX7, 8, and 9 APIs all have quite drastic differences, but the nice thing is they're all backwards compatible. If the code you have is pure DX7, try compiling against the latest SDK and see if it works on Win7 straight off.
It's been a while since I've written any DirectX 7 code (or DirectX code at all), but if I recall there were some significant API changes even between 7 and 8 - let alone 9 or 10 - that would make such a change a bit more difficult. Specifically, I think the major change was that they refactored the system after 7 to merge DirectDraw into Direct3D, so that the two systems were no longer completely separate as they were in 7. I haven't looked at it since then, but I suspect that, given the number of new coding methods and the like, the API has changed quite a bit, so it is probably going to be a bit of a project to make these changes rather than mostly recompilation like you might have hoped.
You actually moved things in and out of video memory? shudder
Still ... it's quicker to do that now than when DX7 was around. What exactly were you doing? From your description it's impossible to say how easy it would be. A DX7 app should still run on Windows 7; I can't think of what odd features you may have used that would cause it to break.
Also converting an application to DX9 from 7 is not actually all that hard (Converting to DX10+ would be a nightmare). They are still relatively similar ... the main thing that has changed since those days is the shortening of things like D3DTRANSFORMSTAGESTATE_* or D3DRENDERSTATE_* to D3DTSS_* or D3DRS_*.
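For instance, a typical render-state call before and after the rename (a representative pair, not taken from your code):
// DX7 spelling:
device7->SetRenderState(D3DRENDERSTATE_ZENABLE, TRUE);
// DX9 spelling of the same call:
device9->SetRenderState(D3DRS_ZENABLE, TRUE);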
Edit: The biggest change I can think of that has happened since DX7 is that graphics card manufacturers have dropped support for palettised textures which "could" break some old apps on modern machines. That really is a very simple fix though ...
Edit2: Decompressing things from disk into a texture can be a bit of a pain. Your main issue is the fact that you suffer a performance hit when you create the texture. However, if you have a load of textures already created and open, then you can load into the relevant texture as and when you please; you only suffer a lock/unlock hit. That can be mitigated by loading a resource a few frames in advance. If you do this, though, it will no doubt require multi-threading and calling D3D from multiple threads; if you do that, set the multi-thread flag on the device.
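In DX9 that flag is set at device creation, something like this (a sketch - d3d and hwnd are assumed to already exist, and the present parameters are only illustrative):
// Create the device with D3DCREATE_MULTITHREADED so D3D may be
// called from the background loader thread as well.
D3DPRESENT_PARAMETERS pp = {};
pp.Windowed = TRUE;
pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
pp.BackBufferFormat = D3DFMT_UNKNOWN;

IDirect3DDevice9* device = NULL;
d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                  D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED,
                  &pp, &device);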