BufferedSoundStream can't play wav files - xamarin

I'm making an app for my drum classes, and to make it cross-platform I've chosen Urho.Sharp because it has a low-level sound API as well as rich graphics capabilities.
As a first step I'm building a metronome, and for that I'm working with a BufferedSoundStream, adding audio and then the needed silence, as described here: https://github.com/xamarin/urho-samples/blob/master/FeatureSamples/Core/29_SoundSynthesis/SoundSynthesis.cs
But the resulting sound is not a sound at all; it's as if random bits got into the buffered stream.
This is my code:
///
/// this code initializes the sound subsystem
///
void CreateSound()
{
// Sound source needs a node so that it is considered enabled
node = new Node();
SoundSource source = node.CreateComponent<SoundSource>();
soundStream = new BufferedSoundStream();
// Set format: 44100 Hz, sixteen bit, stereo
soundStream.SetFormat(44100, true, true);
// Start playback. We don't have data in the stream yet, but the
// SoundSource will wait until there is data,
// as the stream is by default in the "don't stop at end" mode
source.Play(soundStream);
}
///
/// this code preloads all sound resources
///
readonly Dictionary<PointSoundType, string> SoundsMapping = new Dictionary<PointSoundType, string>
{
{PointSoundType.beat, "wav/beat.wav"},
{PointSoundType.click, "wav/click.wav"},
{PointSoundType.click_accent, "wav/click_accent.wav"},
{PointSoundType.crash, "wav/crash.wav"},
{PointSoundType.foot_hh, "wav/foot_hh.wav"},
{PointSoundType.hh, "wav/hh.wav"},
{PointSoundType.open_hh, "wav/open_hh.wav"},
{PointSoundType.ride, "wav/ride.wav"},
{PointSoundType.snare, "wav/snare.wav"},
{PointSoundType.tom_1, "wav/tom_1.wav"},
{PointSoundType.tom_2, "wav/tom_2.wav"},
};
Dictionary<PointSoundType, Sound> SoundCache = new Dictionary<PointSoundType, Sound>();
private void LoadSoundResources()
{
// preload all sounds
foreach (var s in SoundsMapping)
{
SoundCache[s.Key] = ResourceCache.GetSound(s.Value);
Debug.WriteLine("resource loaded: " + s.Value + ", length = " + SoundCache[s.Key].Length);
}
}
///
/// this code fills up the stream with audio
///
private void UpdateSound()
{
// Try to keep 1/10 seconds of sound in the buffer, to avoid both dropouts and unnecessary latency
//float targetLength = 1.0f / 10.0f;
// temporarily increase the buffer to 1 s
float targetLength = 1.0f;
float requiredLength = targetLength - soundStream.BufferLength;
if (requiredLength < 0.0f)
return;
uint numSamples = (uint)(soundStream.Frequency * requiredLength);
// check if stream is still full
if (numSamples == 0)
return;
var silencePause = new short[44100]; // 44,100 samples of silence (0.5 s at 44.1 kHz stereo)
// iterate over all sounds and queue them in the stream
SoundCache.All(s =>
{
soundStream.AddData(s.Value.Handle, s.Value.DataSize);
// add silence
soundStream.AddData(silencePause, 0, silencePause.Length);
return true;
});
}
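For what it's worth, the garbage audio is consistent with the stream being handed something other than raw PCM: Sound.Handle is the pointer to the native Sound object, not to its decoded sample data. The linked SoundSynthesis sample queues plain interleaved 16-bit samples instead; here is a minimal sketch of that idea (a hypothetical 440 Hz tone, not code from the sample):
// BufferedSoundStream.AddData expects raw interleaved 16-bit PCM samples
float phase = 0f;
uint frames = (uint)(soundStream.Frequency / 10); // ~1/10 s of audio
short[] pcm = new short[frames * 2];              // stereo: two samples per frame
for (int i = 0; i < frames; i++)
{
    phase += (float)(2 * Math.PI * 440 / soundStream.Frequency);
    short s = (short)(Math.Sin(phase) * short.MaxValue * 0.5);
    pcm[2 * i] = s;     // left channel
    pcm[2 * i + 1] = s; // right channel
}
soundStream.AddData(pcm, 0, pcm.Length);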

Make sure your wav files are in the resource cache. Then don't play the BufferedSoundStream, but the Urho.Audio.Sound itself. This is just a different overload of the same method, Urho.Audio.SoundSource.Play(), but it works.
int PlaySound(string sSound)
{
var cache = Application.Current.ResourceCache;
Urho.Audio.Sound sound = cache.GetSound(sSound);
if (sound != null)
{
Node soundNode = scene.CreateChild("Sound");
Urho.Audio.SoundSource soundSource = soundNode.CreateComponent<Urho.Audio.SoundSource>();
soundSource.Play(sound);
soundSource.Gain = 0.99f;
return 1;
}
return 0;
}
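One thing to watch with this approach (my observation, not part of the original answer): every call creates a new child node in the scene, so a long-running metronome accumulates nodes. A hedged sketch of reusing a single source instead, combined with the SoundCache dictionary from the question:
SoundSource metronomeSource; // hypothetical field, created once alongside the scene
int PlayCached(PointSoundType type)
{
    // Play() on the same source replaces whatever was playing before
    metronomeSource.Play(SoundCache[type]);
    return 1;
}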
Since you're using urho-samples, you can start each drum sample from the OnUpdate override, something like this:
public float fRun = 0.0f;
public int iRet = 0; // keep counting the played sounds
public int iLast = -1; // added guard: without it each condition fires on every frame within the same tenth of a second
public override void OnUpdate(float timeStep)
{
fRun = fRun + timeStep;
int iMS = (int)(10f * fRun); // tenths of a second
if (iMS == iLast) return; // this tenth of a second was already handled
iLast = iMS;
if (iMS == 100) iRet = iRet + PlaySound("wav/hh.wav");
if (iMS == 120) iRet = iRet + PlaySound("wav/hh.wav");
if (iMS == 140) iRet = iRet + PlaySound("wav/hh.wav");
if (iMS == 160) iRet = iRet + PlaySound("wav/open_hh.wav");
if (iMS >= 160) fRun = 0.8f;
}

Related

How do I change the speed of an AnimationTimer in JavaFX?

I'm trying to change the speed of an AnimationTimer so the code runs more slowly. Here is the code I have so far:
AnimationTimer timer = new AnimationTimer() {
@Override
public void handle(long now) {
if (upOrDown != 1) {
for (int i = 0; i < 4; i++) {
snakePositionDown[i] = snake[i].getX();
snakePositionDownY[i] = snake[i].getY();
}
snake[0].setY(snake[0].getY() + 25);
for (int i = 1; i < 4; i++) {
snake[i].setX(snakePositionDown[i - 1]);
snake[i].setY(snakePositionDownY[i - 1]);
}
leftOrRight = 2;
upOrDown = 0;
}
}
};
timer.start();
How would I make the AnimationTimer run slower?
Thanks in advance!
You could use a Timeline for this purpose. Adjusting the Timeline.rate property also allows you to update the "speed":
// update once every second (as long as rate remains 1)
Timeline timeline = new Timeline(new KeyFrame(Duration.seconds(1), event -> {
if (upOrDown != 1) {
for (int i = 0; i < 4; i++) {
snakePositionDown[i] = snake[i].getX();
snakePositionDownY[i] = snake[i].getY();
}
snake[0].setY(snake[0].getY() + 25);
for (int i = 1; i < 4; i++) {
snake[i].setX(snakePositionDown[i - 1]);
snake[i].setY(snakePositionDownY[i - 1]);
}
leftOrRight = 2;
upOrDown = 0;
}
}));
timeline.setCycleCount(Animation.INDEFINITE);
timeline.play();
...
// double speed
timeline.setRate(2);
An AnimationTimer's handle() method is invoked on every "pulse" - i.e., every time a frame is rendered. By default, the JavaFX toolkit will attempt to do this 60 times per second, but that is not in any way guaranteed. It can be changed by setting a system property, and it is possible that future versions of JavaFX will attempt to execute pulses more frequently. If the FX Application Thread has a large amount of work to do, then pulses may occur less frequently than the target rate. Consequently, the code in your handle() method needs to account for the amount of time since the last update.
The parameter passed to the handle(...) method represents the current system time in nanoseconds. So a typical way to approach this is:
AnimationTimer h = new AnimationTimer() {
private long lastUpdate; // Last time in which `handle()` was called
private double speed = 50; // The snake moves 50 pixels per second
@Override
public void start() {
lastUpdate = System.nanoTime();
super.start();
}
@Override
public void handle(long now) {
long elapsedNanoSeconds = now - lastUpdate;
// 1 second = 1,000,000,000 (1 billion) nanoseconds
double elapsedSeconds = elapsedNanoSeconds / 1_000_000_000.0;
// ...
snake[0].setY(snake[0].getY() + elapsedSeconds * speed);
// ...
lastUpdate = now;
}
};

Save image from Elgato Game Capture using DirectShow

I'm using an Elgato Game Capture HD60 to live-preview a GoPro Hero 5 in my application. Now I want to save the stream as a JPG in my folder, but I can't find out how to.
To bind the pins:
DsROTEntry rot; //Used for remotely connecting to graph
IFilterGraph2 graph;
ICaptureGraphBuilder2 captureGraph;
IBaseFilter elgatoFilter;
IBaseFilter smartTeeFilter;
IBaseFilter videoRendererFilter;
Size videoSize;
private IPin GetPin(PinDirection pinDir, IBaseFilter filter)
{
IEnumPins epins;
int hr = filter.EnumPins(out epins);
if (hr < 0)
return null;
IntPtr fetched = Marshal.AllocCoTaskMem(4);
IPin[] pins = new IPin[1];
epins.Reset();
while (epins.Next(1, pins, fetched) == 0)
{
PinInfo pinfo;
pins[0].QueryPinInfo(out pinfo);
bool found = (pinfo.dir == pinDir);
DsUtils.FreePinInfo(pinfo);
if (found)
return pins[0];
}
return null;
}
private IPin GetPin(PinDirection pinDir, string name, IBaseFilter filter)
{
IEnumPins epins;
int hr = filter.EnumPins(out epins);
if (hr < 0)
return null;
IntPtr fetched = Marshal.AllocCoTaskMem(4);
IPin[] pins = new IPin[1];
epins.Reset();
while (epins.Next(1, pins, fetched) == 0)
{
PinInfo pinfo;
pins[0].QueryPinInfo(out pinfo);
bool found = (pinfo.dir == pinDir && pinfo.name == name);
DsUtils.FreePinInfo(pinfo);
if (found)
return pins[0];
}
return null;
}
And to start the stream:
private void button1_Click(object sender, EventArgs e)
{
//Set the video size to use for capture and recording
videoSize = new Size(1280, 720);
//Initialize filter graph and capture graph
graph = (IFilterGraph2)new FilterGraph();
captureGraph = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
captureGraph.SetFiltergraph(graph);
rot = new DsROTEntry(graph);
//Create filter for Elgato
Guid elgatoGuid = new Guid("39F50F4C-99E1-464A-B6F9-D605B4FB5918");
Type comType = Type.GetTypeFromCLSID(elgatoGuid);
elgatoFilter = (IBaseFilter)Activator.CreateInstance(comType);
graph.AddFilter(elgatoFilter, "Elgato Video Capture Filter");
//Create smart tee filter, add to graph, connect Elgato's video out to smart tee in
smartTeeFilter = (IBaseFilter)new SmartTee();
graph.AddFilter(smartTeeFilter, "Smart Tee");
IPin outPin = GetPin(PinDirection.Output, "Video", elgatoFilter);
IPin inPin = GetPin(PinDirection.Input, smartTeeFilter);
graph.Connect(outPin, inPin);
//Create video renderer filter, add it to graph, connect smartTee Preview pin to video renderer's input pin
videoRendererFilter = (IBaseFilter)new VideoRenderer();
graph.AddFilter(videoRendererFilter, "Video Renderer");
outPin = GetPin(PinDirection.Output, "Preview", smartTeeFilter);
inPin = GetPin(PinDirection.Input, videoRendererFilter);
graph.Connect(outPin, inPin);
//Render stream from video renderer
captureGraph.RenderStream(PinCategory.Preview, MediaType.Video, videoRendererFilter, null, null);
//Set the video preview to be the videoFeed panel
IVideoWindow vw = (IVideoWindow)graph;
vw.put_Owner(pictureBox1.Handle);
vw.put_MessageDrain(this.Handle);
vw.put_WindowStyle(WindowStyle.Child | WindowStyle.ClipSiblings | WindowStyle.ClipChildren);
vw.SetWindowPosition(0, 0, 1280, 720);
//Start the preview
IMediaControl mediaControl = graph as IMediaControl;
mediaControl.Run();
}
Can you run the filter graph successfully, and at which step did you get an error?
You can look at the sample code in \Samples\Capture\PlayCap to see how to build a video capture filter graph.
If you want to take a video snapshot, you can look at the sample code in \Samples\Capture\DxSnap.
You can modify the video source index and the video snapshot size to get what you want.
const int VIDEODEVICE = 0; // zero based index of video capture device to use
const int VIDEOWIDTH = 2048; // Depends on video device caps
const int VIDEOHEIGHT = 1536; // Depends on video device caps
const int VIDEOBITSPERPIXEL = 24; // BitsPerPixel values determined by device
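If you would rather grab frames from the preview graph you already built, here is a rough, untested sketch of the DxSnap idea adapted to it. It assumes a SampleGrabber filter, sampGrabber, forced to RGB24 with SetBufferSamples(true), was inserted between the smart tee's Capture pin and a Null Renderer before the graph runs; it needs System.Drawing, System.Drawing.Imaging and System.Runtime.InteropServices:
private void SaveSnapshot(string path)
{
    // first call with a null buffer asks only for the required size
    int bufSize = 0;
    int hr = sampGrabber.GetCurrentBuffer(ref bufSize, IntPtr.Zero);
    DsError.ThrowExceptionForHR(hr);
    IntPtr buffer = Marshal.AllocCoTaskMem(bufSize);
    try
    {
        hr = sampGrabber.GetCurrentBuffer(ref bufSize, buffer);
        DsError.ThrowExceptionForHR(hr);
        // 24bpp stride = width * 3 (DWORD-aligned for 1280);
        // RGB24 frames arrive bottom-up, so flip before saving
        using (var bmp = new Bitmap(videoSize.Width, videoSize.Height,
            videoSize.Width * 3, PixelFormat.Format24bppRgb, buffer))
        {
            bmp.RotateFlip(RotateFlipType.RotateNoneFlipY);
            bmp.Save(path, ImageFormat.Jpeg);
        }
    }
    finally
    {
        Marshal.FreeCoTaskMem(buffer);
    }
}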

How to run processing script on multiple frames in a folder

Using Processing, I am trying to run a script that will process a folder full of frames.
The script is a combination of PixelSortFrames and SortThroughSeamCarving.
I am new to Processing, and what I want does not seem to be working. I would like the script to loop back and pick the next file in the folder to process. At the moment it stops at the end and does not move on to the next file (there are three other modules involved as well).
Any help would be much appreciated. :(
/* ASDFPixelSort for video frames v1.0
Original ASDFPixelSort by Kim Asendorf <http://kimasendorf.com>
https://github.com/kimasendorf/ASDFPixelSort
Fork by dx <http://dequis.org> and chinatsu <http://360nosco.pe> */
// Main configuration
String basedir = ".../Images/Seq_002"; // Specify the directory in which the frames are located. Use forward slashes.
String fileext = ".jpg"; // Change to the format your images are in.
int resumeprocess = 0; // If you wish to resume a previously stopped process, change this value.
boolean reverseIt = true;
boolean saveIt = true;
int mode = 2; // MODE: 0 = black, 1 = bright, 2 = white
int blackValue = -10000000;
int brightnessValue = -1;
int whiteValue = -6000000;
// -------
PImage img, original;
float[][] sums;
int bottomIndex = 0;
String[] filenames;
int row = 0;
int column = 0;
int i = 0;
java.io.File folder = new java.io.File(dataPath(basedir));
java.io.FilenameFilter extfilter = new java.io.FilenameFilter() {
boolean accept(File dir, String name) {
return name.toLowerCase().endsWith(fileext);
}
};
void setup() {
if (resumeprocess > 0) {i = resumeprocess - 1;frameCount = i;}
size(1504, 1000); // Resolution of the frames. It's likely there's a better way of doing this..
filenames = folder.list(extfilter);
println(" " + width + " x " + height + " px");
println("Creating buffer images...");
PImage hImg = createImage(1504, 1000, RGB);
PImage vImg = createImage(1504, 1000, RGB);
// draw image and convert to grayscale
if (i +1 > filenames.length) {println("Uh.. Done!"); System.exit(0);}
img = loadImage(basedir+"/"+filenames[i]);
original = loadImage(basedir+"/"+filenames[i]);
image(img, 0, 0);
filter(GRAY);
img.loadPixels(); // updatePixels is in the 'runKernals'
// run kernels to create "energy map"
println("Running kernals on image...");
runKernels(hImg, vImg);
image(img, 0, 0);
// sum pathways through the image
println("Getting sums through image...");
sums = getSumsThroughImage();
image(img, 0, 0);
loadPixels();
// get start point (smallest value) - this is used to find the
// best seam (starting at the lowest energy)
bottomIndex = width/2;
// bottomIndex = findStartPoint(sums, 50);
println("Bottom index: " + bottomIndex);
// find the pathway with the lowest information
int[] path = new int[height];
path = findPath(bottomIndex, sums, path);
for (int bi=0; bi<width; bi++) {
// get the pixels of the path from the original image
original.loadPixels();
color[] c = new color[path.length]; // create array of the seam's color values
for (int i=0; i<c.length; i++) {
try {
c[i] = original.pixels[i*width + path[i] + bi]; // set color array to values from original image
}
catch (Exception e) {
// when we run out of pixels, just ignore
}
}
println(" " + bi);
c = sort(c); // sort (use better algorithm later)
if (reverseIt) {
c = reverse(c);
}
for (int i=0; i<c.length; i++) {
try {
original.pixels[i*width + path[i] + bi] = c[i]; // reverse! set the pixels of the original from sorted array
}
catch (Exception e) {
// when we run out of pixels, just ignore
}
}
original.updatePixels();
}
// when done, update pixels to display
updatePixels();
// display the result!
image(original, 0, 0);
if (saveIt) {
println("Saving file...");
//filenames = stripFileExtension(filenames);
save("results/SeamSort_" + filenames + ".tiff");
}
println("DONE!");
}
// strip file extension for saving and renaming
String stripFileExtension(String s) {
s = s.substring(s.lastIndexOf('/')+1, s.length());
s = s.substring(s.lastIndexOf('\\')+1, s.length());
s = s.substring(0, s.lastIndexOf('.'));
return s;
}
This code works, processing all the images in the selected folder:
String basedir = "D:/things/pixelsortframes"; // Specify the directory in which the frames are located. Use forward slashes.
String fileext = ".png"; // Change to the format your images are in.
int resumeprocess = 0; // If you wish to resume a previously stopped process, change this value.
int mode = 1; // MODE: 0 = black, 1 = bright, 2 = white
int blackValue = -10000000;
int brightnessValue = -1;
int whiteValue = -6000000;
PImage img;
String[] filenames;
int row = 0;
int column = 0;
int i = 0;
java.io.File folder = new java.io.File(dataPath(basedir));
java.io.FilenameFilter extfilter = new java.io.FilenameFilter() {
boolean accept(File dir, String name) {
return name.toLowerCase().endsWith(fileext);
}
};
void setup() {
if (resumeprocess > 0) {i = resumeprocess - 1;frameCount = i;}
size(1920, 1080); // Resolution of the frames. It's likely there's a better way of doing this..
filenames = folder.list(extfilter);
}
void draw() {
if (i +1 > filenames.length) {println("Uh.. Done!"); System.exit(0);}
row = 0;
column = 0;
img = loadImage(basedir+"/"+filenames[i]);
image(img,0,0);
while(column < width-1) {
img.loadPixels();
sortColumn();
column++;
img.updatePixels();
}
while(row < height-1) {
img.loadPixels();
sortRow();
row++;
img.updatePixels();
}
image(img,0,0);
saveFrame(basedir+"/out/"+filenames[i]);
println("Frames processed: "+frameCount+"/"+filenames.length);
i++;
}
Essentially I want to do the same thing, only with a different image process, but my code is not doing this to all the files in the folder... just one file.
You seem to be confused about what the setup() function does. It runs once, and only once, at the beginning of your code's execution. You don't have any looping structure for processing the other files, so it's no wonder that it only processes the first one. Perhaps wrap the entire thing in a for loop? It looks like you kind of thought about this, judging by the global variable i, but you never increment it to go to the next image and you overwrite its value in several for loops later anyway.

Is it possible to record sound played on the sound card?

Is it even remotely possible to record sound that is being played on the sound card?
Suppose I am playing an audio file; what I need is to redirect the output to the disk. DirectShow might be feasible.
Any help is greatly appreciated, thanks.
You need to enable an audio loopback device, and then you will be able to record from it in a standard way with all the well-known APIs (including DirectShow).
Loopback Recording
Enabling "Stereo Mix" in Vista
Capturing Window's audio in C#
Once enabled, you will see the device in DirectShow apps.
Check out NAudio and this thread for a WasapiLoopbackCapture class that takes the output from your sound card and turns it into a WaveIn device you can record from:
https://naudio.codeplex.com/discussions/203605/
My solution, using the C# NAudio library v1.9.0:
waveInStream = new WasapiLoopbackCapture(); // record from the sound card
waveInStream.DataAvailable += new EventHandler<WaveInEventArgs>(this.OnDataAvailable); // sound data event
waveInStream.RecordingStopped += new EventHandler<StoppedEventArgs>(this.OnDataStopped); // recording stopped event
MessageBox.Show(waveInStream.WaveFormat.Encoding + " - " + waveInStream.WaveFormat.SampleRate + " - " + waveInStream.WaveFormat.BitsPerSample + " - " + waveInStream.WaveFormat.Channels);
// IEEEFLOAT - 48000 - 32 - 2
// Explanation: IEEEFLOAT = encoding | 48000 Hz | 32-bit | 2 = stereo, 1 = mono
writer = new WaveFileWriter("out.wav", new WaveFormat(48000, 24, 2));
waveInStream.StartRecording(); // start recording
The event handlers:
WaveFormat myOutFormat = new WaveFormat(48000, 24, 2); // standard PCM encoding
private void OnDataAvailable(object sender, WaveInEventArgs e)
{
// e.Buffer arrives in the default IEEEFLOAT encoding
//MessageBox.Show(e.Buffer + " - " + e.BytesRecorded);
// if you need to change the encoding, use WaveFormatConvert below
byte[] output = WaveFormatConvert(e.Buffer, e.BytesRecorded, waveInStream.WaveFormat, myOutFormat);
writer.Write(output, 0, output.Length);
}
private void OnDataStopped(object sender, StoppedEventArgs e)
{
if (writer != null)
{
writer.Close();
}
if (waveInStream != null)
{
waveInStream.Dispose();
}
}
WaveFormatConvert (optional):
public byte[] WaveFormatConvert(byte[] input, int length, WaveFormat inFormat, WaveFormat outFormat)
{
if (length == 0)
return new byte[0];
using (var memStream = new MemoryStream(input, 0, length))
{
using (var inputStream = new RawSourceWaveStream(memStream, inFormat))
{
using (var resampler = new MediaFoundationResampler(inputStream, outFormat)) {
resampler.ResamplerQuality = 60; // 1 = lowest, 60 = highest quality
// read back the converted stream
byte[] buffer = new byte[length];
using (var stream = new MemoryStream())
{
int read;
while ((read = resampler.Read(buffer, 0, length)) > 0)
{
stream.Write(buffer, 0, read);
}
return stream.ToArray();
}
}
}
}
}
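A note on the design choice: the MediaFoundationResampler converts the 32-bit float loopback samples to 24-bit PCM on the fly. If float output is acceptable, you can skip the conversion entirely, since WaveFileWriter will happily write the capture's native format (a hedged alternative, not from the original post):
// create the writer with the capture's own IEEE float format...
writer = new WaveFileWriter("out.wav", waveInStream.WaveFormat);
// ...and in OnDataAvailable simply write the buffer through unchanged
writer.Write(e.Buffer, 0, e.BytesRecorded);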

Can I get timecode from directshow video?

I'm trying to make a timecode counter for a video player based on GMFBridge and DirectShow.
I'm using a Timer to call GetCurrentPosition() every 200 ms, but I believe it's not accurate. I'd like at least to get the frame number (from the start) of the current frame while a video is running.
Can this actually be done?
I'm using DirectShowLib .NET library.
To my knowledge this is hard to achieve. In a solution I work on, I did the following to get a 'frame number':
public int NumberOfFrames
{
get
{
return (int)(Duration / AverageTimePerFrame);
}
}
public double AverageTimePerFrame
{
get
{
return videoInfoHeader.AvgTimePerFrame / 10000000.0;
}
}
public int GetCurrentFrame(double currentTime)
{
int noOfFrames = (int)(Duration / AverageTimePerFrame);
return Convert.ToInt32(Math.Min(noOfFrames - 1, Math.Floor(currentTime / AverageTimePerFrame)));
}
I got the videoInfoHeader by doing:
// Get the media type from the SampleGrabber
AMMediaType media = new AMMediaType();
hr = sampGrabber.GetConnectedMediaType(media);
DsError.ThrowExceptionForHR(hr);
if ((media.formatType != FormatType.VideoInfo) || (media.formatPtr == IntPtr.Zero))
{
throw new NotSupportedException("Unknown Grabber Media Format");
}
// Grab the size info
videoInfoHeader = (VideoInfoHeader)Marshal.PtrToStructure(media.formatPtr, typeof(VideoInfoHeader));
DsUtils.FreeAMMediaType(media);
However, this is obviously tailored to my own use case; hopefully it helps you a bit though. Good luck!
Updated
Added the CurrentTime code (the locker is for my own usage; you can most likely remove it):
public double CurrentTime
{
set
{
lock (locker)
{
IMediaPosition mediaPos = fFilterGraph as IMediaPosition;
int hr;
if (value >= 0 && value <= Duration)
{
hr = mediaPos.put_CurrentPosition(value);
DsError.ThrowExceptionForHR(hr);
}
}
}
get
{
lock (locker)
{
IMediaPosition mediaPos = fFilterGraph as IMediaPosition;
int hr;
double currentTime;
hr = mediaPos.get_CurrentPosition(out currentTime);
DsError.ThrowExceptionForHR(hr);
return currentTime;
}
}
}
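Putting the pieces together, a small hypothetical usage example (player stands for whatever class hosts the members above):
// on each timer tick, convert the current position to a frame index
double t = player.CurrentTime;         // seconds from the start
int frame = player.GetCurrentFrame(t); // zero-based frame number
Console.WriteLine("Frame " + frame + " of " + player.NumberOfFrames);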
