Trying to play a song in Processing. The song is in the sketch's data folder and the Processing sound library is installed.
Code:
import processing.sound.*;

// A SoundFile object (for a sound)
SoundFile song;

void setup() {
  size(480, 270);
  song = new SoundFile(this, "test.mp3");
  song.play();
}

void draw() {
}
The error shown is a NullPointerException.
Double-check the path to the audio file, and if the path is valid, double-check the audio file itself.
There are many encoding presets, and not all of them are supported by the Processing sound library.
To be on the safe side, try exporting your audio file as a signed 16-bit, 44100 Hz uncompressed WAV file (from a DAW tool like Audacity or similar).
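If you want to sanity-check an exported file before loading it in Processing, the key fields live in the first 44 bytes of a canonical PCM WAV header. A minimal sketch of that check, in Python because it is about the file format rather than the Processing API (the header-building helper is just for demonstration):

```python
import struct

def wav_format(header: bytes):
    """Parse the fmt fields of a canonical 44-byte PCM WAV header."""
    if header[0:4] != b"RIFF" or header[8:12] != b"WAVE":
        raise ValueError("not a RIFF/WAVE file")
    # Canonical layout: the "fmt " chunk starts at byte 12.
    audio_format, channels = struct.unpack_from("<HH", header, 20)
    sample_rate, = struct.unpack_from("<I", header, 24)
    bits_per_sample, = struct.unpack_from("<H", header, 34)
    return audio_format, channels, sample_rate, bits_per_sample

def pcm_header(channels=1, rate=44100, bits=16, data_len=0):
    """Build a canonical PCM WAV header in memory, for demonstration only."""
    block_align = channels * bits // 8
    return (b"RIFF" + struct.pack("<I", 36 + data_len) + b"WAVE"
            + b"fmt " + struct.pack("<IHHIIHH", 16, 1, channels, rate,
                                    rate * block_align, block_align, bits)
            + b"data" + struct.pack("<I", data_len))

fmt, ch, rate, bits = wav_format(pcm_header())
print(fmt, ch, rate, bits)  # → 1 1 44100 16  (1 = linear PCM)
```

An audio_format of 1 with 16 bits per sample and a 44100 Hz rate is the "safe" export described above; anything else (e.g. compressed formats inside a .wav container) is a candidate for the loading failure.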
I am generating an RTSP stream using GStreamer in an iOS app, and I'm trying to use ffmpeg in a Mac OS X audio driver (written in Xcode) to strip the audio out of the stream and pump it to Skype, Zoom, or whatever. All the code is written in old-fashioned C. I do get a non-NULL FILE* back from a popen call that executes ffmpeg on the input RTSP stream, but when I try to read binary data from it, the read returns zero bytes. Here is the code:
FILE *readPipeFromFFMPEG = popen("Contents/MacOS/ffmpeg -i rtsp://192.168.0.30:8554/test -vn -acodec copy -flush_packets pipe:1", "r+");
int pipeFD = fileno(readPipeFromFFMPEG);
char *buffer = (char *)calloc(inIOBufferFrameSize * 8, 1);
ssize_t numBytesRead = read(pipeFD, buffer, inIOBufferFrameSize * 8);
free(buffer);
pclose(readPipeFromFFMPEG);
but numBytesRead always comes back as zero. Does anybody have any clue what I need to do to get this working properly? It seems like maybe a permissions issue where I don't have permission to read from the stream, or maybe my ffmpeg parameters are incorrect? I am able to open the stream in VLC and OBS Studio no problem, and they display the video frames and play the audio. I must be missing something totally obvious: when I run OBS Studio or VLC, the iPhone app prints out that the client is requesting the audio and video packets, but when the audio driver is running nothing is printed out in the iPhone app. I am really stuck and need help!
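For what it's worth, read() returning 0 on a pipe normally means EOF: the writer closed its end without producing any bytes. With `-acodec copy` to `pipe:1`, ffmpeg typically also needs an explicit output container (an `-f` option) because it cannot guess the format for a pipe, so it may be exiting before writing anything; its error messages go to stderr, which this code never looks at. The reading side should also loop until EOF rather than issue a single read. A hedged sketch of that loop in Python (the child command is a stand-in, not the real ffmpeg invocation):

```python
import subprocess
import sys

def drain_pipe(argv, chunk_size=4096):
    """Read a child process's stdout until EOF and return all the bytes."""
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE)
    chunks = []
    while True:
        chunk = proc.stdout.read(chunk_size)
        if not chunk:          # b"" means the writer closed the pipe (EOF)
            break
        chunks.append(chunk)
    proc.wait()
    return b"".join(chunks)

# Stand-in writer; in the real driver this would be the ffmpeg command line.
data = drain_pipe([sys.executable, "-c",
                   "import sys; sys.stdout.write('fake-audio-bytes')"])
print(len(data))  # → 16
```

If the equivalent C loop still sees immediate EOF, capturing ffmpeg's stderr output is the quickest way to find out why it produced nothing.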
Is it possible to create a cache for HLS-type videos when playing in ExoPlayer, so that once a video has been completely streamed it doesn't need to load again and starts playing immediately the next time the play button is clicked? If possible, please provide a solution. The video format is .m3u8.
For non-ABR streams, i.e. not HLS or DASH etc.
There is a well used library which provides video caching functionality:
https://github.com/danikula/AndroidVideoCache
Bear in mind that large videos will use a lot of storage, so you may want to consider when and where you want to cache.
Update for ABR streams
Adaptive Bit Rate Streaming protocols like HLS or DASH typically have segmented multiple bit rate versions of the video, and the player will download the video segment by segment, choosing the bitrate for the next segment depending on the network conditions and the device capabilities.
For this reason, simply storing what you are viewing may not give you the result you want - for example if you have some network congestion part way through the video you may receive lower quality segments, which you probably don't want for a video you will watch multiple times.
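The segment-by-segment selection described above can be illustrated with a toy simulation. The bitrate ladder, bandwidth samples, and safety factor below are invented for the illustration and have nothing to do with ExoPlayer's actual algorithm:

```python
# Toy ABR simulation: for each segment, pick the highest rendition that fits
# the measured bandwidth. All numbers here are made up for illustration.
LADDER_KBPS = [400, 1200, 3000]          # available renditions

def pick_rendition(measured_kbps, safety=0.8):
    usable = measured_kbps * safety      # leave headroom below the measurement
    fitting = [r for r in LADDER_KBPS if r <= usable]
    return max(fitting) if fitting else min(LADDER_KBPS)

bandwidth_per_segment = [5000, 4800, 900, 700, 5100]   # congestion mid-stream
cached = [pick_rendition(b) for b in bandwidth_per_segment]
print(cached)  # → [3000, 3000, 400, 400, 3000]
```

The output shows the problem with naively caching what was viewed: the congested middle of the session leaves low-quality segments permanently in the cache.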
You can download or play a video, forcing the stream to always select from one specific resolution by using a track selector. ExoPlayer documentation includes some info here:
https://exoplayer.dev/downloading-media.html
In an older blog post (2 years old but the DownloadHelper part is still relevant, I think), Google provide info on how to use the DownloadHelper - https://medium.com/google-exoplayer/downloading-adaptive-streams-37191f9776e.
This includes the example:
// Replace with HlsDownloaderHelper or SsDownloadHelper if the stream is HLS or SmoothStreaming
DownloadHelper downloadHelper =
    new DashDownloadHelper(manifestUri, manifestDataSourceFactory);
downloadHelper.prepare(
    new Callback() {
      @Override
      public void onPrepared(DownloadHelper helper) {
        // Preparation completes. Now other DownloadHelper methods can be called.
        List<TrackKey> trackKeys = new ArrayList<>();
        for (int i = 0; i < downloadHelper.getPeriodCount(); i++) {
          TrackGroupArray trackGroups = downloadHelper.getTrackGroups(i);
          for (int j = 0; j < trackGroups.length; j++) {
            TrackGroup trackGroup = trackGroups.get(j);
            for (int k = 0; k < trackGroup.length; k++) {
              Format track = trackGroup.getFormat(k);
              if (shouldDownload(track)) {
                trackKeys.add(new TrackKey(i, j, k));
              }
            }
          }
        }
        DownloadAction downloadAction = downloadHelper.getDownloadAction(null, trackKeys);
        DownloadService.startWithAction(context, MyDownloadService.class, downloadAction, true);
      }

      private boolean shouldDownload(Format track) {...}

      @Override
      public void onPrepareError(DownloadHelper helper, IOException e) {...}
    });
The code above looks at the manifest file - the index file for DASH or HLS, which lists the individual tracks and provides info, e.g. URLs, on where to find them.
It loops through each track that it finds and calls a function, which you can define however you want, to decide whether to include or exclude that track from the download.
To use track selection when playing back a streamed video, you can control this programmatically using the DefaultTrackSelector: https://exoplayer.dev/track-selection.html. This link includes an example that selects SD video and a German audio track:
trackSelector.setParameters(
trackSelector
.buildUponParameters()
.setMaxVideoSizeSd()
.setPreferredAudioLanguage("deu"));
The standard player also allows a user to select the track from the controls, if you are displaying controls - the ExoPlayer demo app includes this functionality.
One note - ABR streaming is quite complex and requires extra processing and storage on the server side. If you expect to use only one quality level, it may make more sense to simply stream or download the videos as MP4s etc.
I am trying to import an .avi file for frame processing.
Import["c:\\windows\\clock.avi","Elements"]
Import["c:\\windows\\clock.avi","VideoEncoding"]
Import["c:\\windows\\clock.avi"]
Import["c:\\windows\\clock.avi",{"Frames",{5,6}}]
Out[115]= {Animation,BitDepth,ColorSpace,Data,Duration,FrameCount,FrameRate,
Frames,GraphicsList,ImageList,ImageSize,VideoEncoding}
Out[116]= rle8
Out[117]= {1,2,3,4,5,6,7,8,9,10,11,12}
During evaluation of In[115]:= Import::fmterr: Cannot import data as video format.
During evaluation of In[115]:= Import::fmterr: Cannot import data as video format.
Out[118]= {$Failed,$Failed}
It reports the same error with all avi files I tested.
Any hints?
AVI is a container format. You can encode movies with totally bizarre and rare codecs and still call the file .avi.
You could use a video format converter like Freemake to convert your movie into a format Mathematica can use. Check Internal`$VideoEncodings to see which encodings are recognized.
Quite often, QuickTime (.mov) works easiest. AVIs sometimes load just fine but don't display at all, even when I have the correct codec on board and all my players can play them.
If all else fails, you can try VirtualDub. It can read AVIs and split them into separate images, which can easily be imported into Mathematica.
EDIT
I recall from my most recent video project a total failure to read the AVIs I got from having the Firefox plugin DownloadHelper download a certain YouTube movie (though it played in all the players I have: VLC, Media Player Classic, Windows Media Player, etc.). A conversion by DownloadHelper to .mov worked, but it inserts its logo into the video. So I finally resorted to a download with Freemake and conversion to individual frames with VirtualDub.
I am recording audio using the XNA Microphone class and saving the recorded data in isolated storage in WAV format. If the audio is short, my app works fine, but as its length increases, so does the memory consumed by the app, which drastically slows down the device. The following code is used to play the audio:
using (IsolatedStorageFile isoStore = IsolatedStorageFile.GetUserStoreForApplication())
{
    using (IsolatedStorageFileStream fileStream = isoStore.OpenFile(AudioFilePath, FileMode.Open))
    {
        sound = SoundEffect.FromStream(fileStream);
        sound.Play();
    }
}
Any suggestions on how to handle the memory issue while playing large audio files? Or how can I save the PCM in other formats (WMA, MP3) to reduce the size?
SoundEffect isn't intended for playing long pieces of audio. As the name suggests, it is intended for short pieces, and also for playing lots of them, possibly at the same time.
To play longer pieces of audio you should consider MediaElement.
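The memory problem in the question is the classic difference between loading a whole file into RAM (what SoundEffect.FromStream effectively requires) and streaming it in bounded chunks, which is essentially what a media-playback component does for you. The idea, sketched in Python rather than C#/XNA, with a throwaway file standing in for the recording:

```python
import os
import tempfile

def process_streaming(path, chunk_size=64 * 1024):
    """Process a file in fixed-size chunks; peak memory is one chunk,
    regardless of the file's total size."""
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)   # stand-in for "feed this chunk to the player"
    return total, chunk_size

# Demo with a throwaway 1 MiB "recording".
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"\x00" * (1024 * 1024))
path = tmp.name
total, peak = process_streaming(path)
print(total, peak)   # whole file processed, only 64 KiB resident at once
os.remove(path)
```

A whole-file load would instead need memory proportional to the recording length, which matches the slowdown described as the recordings grow.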
I'm currently reading the MSDN documentation on rendering a stream to an audio renderer - in other words, on playing my captured microphone data.
http://msdn.microsoft.com/en-us/library/dd316756%28v=vs.85%29.aspx
That page provides an example, but my problem is that I can't really understand the program flow.
I currently have a different class storing the parameters below, which I obtained from the capture process.
These parameters are continuously re-written as the program captures streaming audio data from the microphone.
BYTE data;
UINT32 bufferframecount;
DWORD flag;
WAVEFORMATEX *pwfx;
My questions are:
How does the loadData() function really work?
Is it supposed to grab the parameters I'm writing from the capture process?
How does the program send the data to the audio renderer and play it through my speakers?
The loadData() function fills in the audio buffer pointed to by pData. The example abstracts the audio source, so this could be anything from a .wav file to the microphone audio that you already captured.
So, if you are trying to build from that example, I would implement the MyAudioSource class and have it just read PCM or float samples from a file whenever loadData() is called. Then, if you run that program, it should play the audio from the file out of the speaker.
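The pull model described here - the renderer repeatedly asking the source to fill a buffer - can be sketched without any WASAPI types. In this sketch, load_data plays the role of the example's loadData(), and the sine-wave source is an invented stand-in for the captured microphone data:

```python
import math

class MyAudioSource:
    """Stand-in for the MSDN example's audio source: fills buffers on demand."""
    def __init__(self, sample_rate=44100, freq=440.0):
        self.sample_rate = sample_rate
        self.freq = freq
        self.pos = 0   # running sample position across calls

    def load_data(self, frame_count):
        """Return frame_count float samples; the renderer 'pulls' from us."""
        samples = [math.sin(2 * math.pi * self.freq * (self.pos + i)
                            / self.sample_rate)
                   for i in range(frame_count)]
        self.pos += frame_count
        return samples

def render_loop(source, total_frames, buffer_frames=512):
    """Toy renderer: pull fixed-size buffers until enough frames are 'played'."""
    rendered = 0
    while rendered < total_frames:
        n = min(buffer_frames, total_frames - rendered)
        buf = source.load_data(n)   # in the WASAPI example this fills pData
        rendered += len(buf)
    return rendered

print(render_loop(MyAudioSource(), 2048))  # → 2048
```

In the real example the loop timing is driven by the audio client's buffer availability rather than a simple while loop, but the data flow - renderer pulls, source fills - is the same.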